Commit 1f31f4e1 authored by brunner

domain as 9 regions, fractional plant types

parent cee00f6a
! CarbonTracker Data Assimilation Shell (CTDAS) Copyright (C) 2017 Wouter Peters.
! Users are recommended to contact the developers (wouter.peters@wur.nl) to receive
! updates of the code. See also: http://www.carbontracker.eu.
!
! This program is free software: you can redistribute it and/or modify it under the
! terms of the GNU General Public License as published by the Free Software Foundation,
! version 3. This program is distributed in the hope that it will be useful, but
! WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
! FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
!
! You should have received a copy of the GNU General Public License along with this
! program. If not, see <http://www.gnu.org/licenses/>.
! author: Wouter Peters
!
! This is a blueprint for an rc-file used in CTDAS. Feel free to modify it, and please go to the main webpage for further documentation.
!
! Note that rc-files have the convention that commented lines start with an exclamation mark (!), while special lines start with a hashtag (#).
!
! When running the script start_ctdas.sh, this rc-file will be copied to your run directory, and some items will be replaced for you.
! The result will be a nearly ready-to-go rc-file for your assimilation job. The entries and their meaning are explained by the comments below.
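Such a file is straightforward to read in Python; a minimal sketch of a parser for the conventions described above (the `parse_rc` name is illustrative, not the CTDAS API):

```python
def parse_rc(text):
    """Parse rc-file text: one 'key : value' per line; '!' starts a comment."""
    settings = {}
    for raw in text.splitlines():
        line = raw.strip()
        # Skip blanks, '!' comment lines, and '#' special lines.
        if not line or line.startswith('!') or line.startswith('#'):
            continue
        key, _, value = line.partition(':')  # split on the first colon only
        settings[key.strip()] = value.strip()
    return settings

demo = "! a comment\ntime.cycle : 7\ntime.nlag : 2\n"
print(parse_rc(demo))  # {'time.cycle': '7', 'time.nlag': '2'}
```

Splitting on the first colon only is what keeps timestamp values such as `2013-04-01 00:00:00` intact.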
!
!
! HISTORY:
!
! Created on August 20th, 2013 by Wouter Peters
!
!
! The time for which to start and end the data assimilation experiment in format YYYY-MM-DD HH:MM:SS
! the following 3 lines are for initial start
time.start : 2013-04-01 00:00:00
time.finish : 2013-04-07 23:00:00
time.end : 2013-04-07 23:00:00
abs.time.start : 2013-04-01 00:00:00
! Whether to restart the CTDAS system from a previous cycle, or to start the sequence fresh. Valid entries are T/F/True/False/TRUE/FALSE
time.restart : F
da.restart.tstamp : 2013-04-01 00:00:00
! The length of a cycle is given in days, such that the integer 7 denotes the typically used weekly cycle. Valid entries are integers > 1
time.cycle : 7
! The number of cycles of lag to use for a smoother version of CTDAS. CarbonTracker CO2 typically uses 5 weeks of lag. Valid entries are integers > 0
time.nlag : 2
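With time.cycle : 7 and time.nlag : 2, each assimilation step covers a week and the smoother carries two overlapping weekly windows. A sketch of the window arithmetic, assuming the dates from this file (variable names are illustrative):

```python
from datetime import datetime, timedelta

start = datetime(2013, 4, 1)   # time.start
cycle_days = 7                 # time.cycle
nlag = 2                       # time.nlag

# Windows held in the state vector during the first cycle:
windows = [(start + timedelta(days=n * cycle_days),
            start + timedelta(days=(n + 1) * cycle_days)) for n in range(nlag)]
for w_start, w_end in windows:
    print(w_start.date(), '->', w_end.date())
```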
! The directory under which the code, input, and output will be stored. This is the base directory for a run. The word
! '/' will be replaced through the start_ctdas.sh script by a user-specified folder name. DO NOT REPLACE
run.name : 9reg
dir.da_run : /scratch/snx3000/parsenov/${run.name}
restartmap.dir : ${dir.da_run}/input
! The resources used to complete the data assimilation experiment. This depends on your computing platform.
! The number of cycles per job denotes how many cycles should be completed before starting a new process or job, this
! allows you to complete many cycles before resubmitting a job to the queue and having to wait again for resources.
! Valid entries are integers > 0
da.resources.ncycles_per_job : 1
! The ntasks specifies the number of threads to use for the MPI part of the code, if relevant. Note that the CTDAS code
! itself is not parallelized and the python code underlying CTDAS does not use multiple processors. The chosen observation
! operator though might use many processors, like TM5. Valid entries are integers > 0
da.resources.ntasks : 1
! This specifies the amount of wall-clock time to request for each job. Its value depends on your computing platform and might take
! any form appropriate for your system. Typically, HPC queueing systems allow you a certain number of hours of usage before
! your job is killed, and you are expected to finalize and submit a next job before that time. Valid entries are strings.
da.resources.ntime : 44:00:00
! The resource settings above will cause the creation of a job file in which 1 cycle will be run, and 1 task
! is requested for a duration of 44 hours
!
! Info on the DA system used, this depends on your application of CTDAS and might refer to for instance CO2, or CH4 optimizations.
!
da.system : CarbonTracker
! The specific settings for your system are read from a separate rc-file, which points to the data directories, observations, etc
da.system.rc : da/rc/carbontracker_cosmo.rc
! This flag should probably be moved to the da.system.rc file. It denotes which type of filtering to use in the optimizer
da.system.localization : CT2007
! Info on the observation operator to be used, these keys help to identify the settings for the transport model in this case
da.obsoperator : cosmo
!
! The transport model is controlled by an rc-file as well. The value below refers to the configuration of the model to
! be used as observation operator in this experiment.
!
da.obsoperator.home : /store/empa/em05/parsenov/cosmo_processing_chain
da.bio.input : /store/empa/em05/parsenov/cosmo_input/vprm/processed
da.bg.input : /store/empa/em05/parsenov/cosmo_input/icbc/processed
da.obsoperator.rc : ${da.obsoperator.home}/tm5-ctdas-ei-zoom.rc
!forward.savestate.exceptsam : TRUE
!
! The number of ensemble members used in the experiment. Valid entries are integers > 2
!
da.optimizer.nmembers : 21
nparameters : 181
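The value 181 is consistent with the commit's new state layout: 9 regions times 20 fractional plant-type parameters per region, plus one background (BG) parameter appended at the end (the 9x20 split is taken from the reshape logic in the optimizer changes of this commit). A quick consistency check:

```python
import numpy as np

nregions, ntypes = 9, 20          # 9 regions, 20 plant-type parameters each
nparams = nregions * ntypes + 1   # plus one background (BG) parameter
assert nparams == 181

# One member's state: per-region block plus trailing BG value.
param_values = np.arange(nparams, dtype=float)
per_region = param_values[:-1].reshape(nregions, ntypes)  # drop BG, view as 9x20
bg = param_values[-1]
print(per_region.shape, bg)  # (9, 20) 180.0
```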
! Finally, info on the archive task, if any. Archive tasks are run after each cycle to ensure that the results of each cycle are
! preserved, even if you run on scratch space or a temporary disk. Since an experiment can take multiple weeks to complete, moving
! your results out of the way, or backing them up, is usually a good idea. Note that the tasks are commented and need to be uncommented
! to use this feature.
! The following key identifies that two archive tasks will be executed, one called 'alldata' and the other 'resultsonly'.
!task.rsync : alldata onlyresults
! The specifics for the first task.
! 1> Which source directories to back up. Valid entry is a list of folders separated by spaces
! 2> Which destination directory to use. Valid entries are a folder name, or server and folder name in rsync format as below
! 3> Which flags to add to the rsync command
! The settings below will result in an rsync command that looks like:
!
! rsync -auv -e ssh ${dir.da_run} you@yourserver.com:/yourfolder/
!
!task.rsync.alldata.sourcedirs : ${dir.da_run}
!task.rsync.alldata.destinationdir : you@yourserver.com:/yourfolder/
!task.rsync.alldata.flags g -auv -e ssh
! Repeated for rsync task 2, note that we only back up the analysis and output dirs here
!task.rsync.onlyresults.sourcedirs : ${dir.da_run}/analysis ${dir.da_run}/output
!task.rsync.onlyresults.destinationdir : you@yourserver.com:/yourfolder/
!task.rsync.onlyresults.flags : -auv -e ssh
@@ -50,7 +50,7 @@ class Optimizer(object):
         self.nlag = dims[0]
         self.nmembers = dims[1]
 #        self.nparams = dims[2]
-        self.nparams = 23
+        self.nparams = 181
         self.nobs = dims[3]
         self.create_matrices()
@@ -109,9 +109,29 @@ class Optimizer(object):
         for n in range(self.nlag):
             samples = statevector.obs_to_assimilate[n]
             members = statevector.ensemble_members[n]
+#            params_2d = members[0].param_values[:-1].reshape(9,22)
+#            params = np.zeros(shape=(9,20))
+#            for r in range(0,9):
+#                params[r,:] = np.delete(np.delete(params_2d[r,:],21),10)
+#            params = np.hstack((params.flatten(),members[0].param_values[-1]))  # put back BG
+#            self.x[n * self.nparams:(n + 1) * self.nparams] = params
             self.x[n * self.nparams:(n + 1) * self.nparams] = members[0].param_values
+#            params = np.zeros(shape=(self.nmembers,181))
+#            params_3d = np.zeros(shape=(self.nmembers,9,20))
+#            for m,mm in enumerate(members):
+#                params_temp = mm.param_values[:-1].reshape(9,22)
+#                params_bg = mm.param_values[-1]
+#                for r in range(0,9):
+#                    params_3d[m,r,:] = np.delete(np.delete(params_temp[r,:],21),10)
+#                params[m,:] = np.hstack((params_3d[m,:,:].flatten(),params_bg))
+#            params = params_3d.reshape(self.nmembers,181)
+#            self.X_prime[n * self.nparams:(n + 1) * self.nparams, :] = np.transpose(np.array([params[m,:] for m,mm in enumerate(members)]))
             self.X_prime[n * self.nparams:(n + 1) * self.nparams, :] = np.transpose(np.array([m.param_values for m in members]))
+#            self.X_prime[n * self.nparams:(n + 1) * self.nparams, :] = np.transpose(np.array([np.delete(np.delete(m.param_values,21),10) for m in members]))
             if samples != None:
                 self.rejection_threshold = samples.rejection_threshold
@@ -154,6 +174,16 @@ class Optimizer(object):
             members = statevector.ensemble_members[n]
             for m, mem in enumerate(members):
                 members[m].param_values[:] = self.X_prime[n * self.nparams:(n + 1) * self.nparams, m] + self.x[n * self.nparams:(n + 1) * self.nparams]
+#                params = (self.X_prime[n * self.nparams:(n + 1) * self.nparams, m] + self.x[n * self.nparams:(n + 1) * self.nparams])  # 181
+#                params_bg = params[-1]  # BG
+#                params = params[:-1]  # remove BG for now
+#                params = params.reshape(9,20)
+#                params_holder = np.zeros((9,22))
+#                for r in range(0,8):
+#                    params_holder[r,:] = np.insert(np.insert(params[r,:],20,0),10,0)
+#                members[m].param_values[:] = np.hstack((params_holder.flatten(),params_bg))
+#                members[m].param_values[:] = np.insert(np.insert(self.X_prime[n * self.nparams:(n + 1) * self.nparams, m] + self.x[n * self.nparams:(n + 1) * self.nparams],10,0),21,0)
         logging.debug('Returning optimized data to the StateVector, setting "StateVector.isOptimized = True" ')
 ...
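The commented code in the hunks above packs a 9x22 parameter block down to 9x20 with np.delete and later restores the dropped columns with np.insert. The index bookkeeping is easy to get wrong (for instance, the commented loop `for r in range(0,8)` above appears to cover only 8 of the 9 regions); a standalone round-trip sketch, independent of CTDAS, with the interpretation of slots 10 and 21 as unused inferred from the zeros inserted on the way back:

```python
import numpy as np

# One region's raw parameter row has 22 slots; two slots (indices 10
# and 21) are stripped to leave the 20 active plant types.
row22 = np.arange(22, dtype=float)

# Pack: delete index 21 first, then index 10, so the second delete
# still refers to the original layout.
row20 = np.delete(np.delete(row22, 21), 10)
assert row20.shape == (20,)

# Unpack: insert a zero at position 20, then at position 10; the first
# inserted zero shifts up to index 21, restoring the original slots.
restored = np.insert(np.insert(row20, 20, 0), 10, 0)
assert restored.shape == (22,)
assert restored[10] == 0 and restored[21] == 0
```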
@@ -57,19 +57,31 @@ class CO2StateVector(StateVector):
        10. None
        """
+#        fullcov = np.array([ \
+#            (1.00, 0.36, 0.16, 0.16, 0.16, 0.16, 0.04, 0.04, 0.04, 0.01, 0.00), \
+#            (0.36, 1.00, 0.16, 0.16, 0.16, 0.16, 0.04, 0.04, 0.04, 0.01, 0.00), \
+#            (0.16, 0.16, 1.00, 0.36, 0.16, 0.16, 0.04, 0.04, 0.04, 0.01, 0.00), \
+#            (0.16, 0.16, 0.36, 1.00, 0.16, 0.16, 0.04, 0.04, 0.04, 0.01, 0.00), \
+#            (0.16, 0.16, 0.16, 0.16, 1.00, 0.36, 0.04, 0.04, 0.04, 0.01, 0.00), \
+#            (0.16, 0.16, 0.16, 0.16, 0.36, 1.00, 0.04, 0.04, 0.04, 0.01, 0.00), \
+#            (0.04, 0.04, 0.04, 0.04, 0.04, 0.04, 1.00, 0.16, 0.16, 0.16, 0.00), \
+#            (0.04, 0.04, 0.04, 0.04, 0.04, 0.04, 0.16, 1.00, 0.16, 0.16, 0.00), \
+#            (0.04, 0.04, 0.04, 0.04, 0.04, 0.04, 0.16, 0.16, 1.00, 0.16, 0.00), \
+#            (0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.16, 0.16, 0.16, 1.00, 0.00), \
+##            (0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 1.e-10) ])
+#            (0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 1.00) ])
         fullcov = np.array([ \
-            (1.00, 0.36, 0.16, 0.16, 0.16, 0.16, 0.04, 0.04, 0.04, 0.01, 0.00), \
-            (0.36, 1.00, 0.16, 0.16, 0.16, 0.16, 0.04, 0.04, 0.04, 0.01, 0.00), \
-            (0.16, 0.16, 1.00, 0.36, 0.16, 0.16, 0.04, 0.04, 0.04, 0.01, 0.00), \
-            (0.16, 0.16, 0.36, 1.00, 0.16, 0.16, 0.04, 0.04, 0.04, 0.01, 0.00), \
-            (0.16, 0.16, 0.16, 0.16, 1.00, 0.36, 0.04, 0.04, 0.04, 0.01, 0.00), \
-            (0.16, 0.16, 0.16, 0.16, 0.36, 1.00, 0.04, 0.04, 0.04, 0.01, 0.00), \
-            (0.04, 0.04, 0.04, 0.04, 0.04, 0.04, 1.00, 0.16, 0.16, 0.16, 0.00), \
-            (0.04, 0.04, 0.04, 0.04, 0.04, 0.04, 0.16, 1.00, 0.16, 0.16, 0.00), \
-            (0.04, 0.04, 0.04, 0.04, 0.04, 0.04, 0.16, 0.16, 1.00, 0.16, 0.00), \
-            (0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.16, 0.16, 0.16, 1.00, 0.00), \
-#            (0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 1.e-10) ])
-            (0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 1.00) ])
+            (1.00, 0.36, 0.16, 0.16, 0.16, 0.16, 0.04, 0.04, 0.04, 0.01), \
+            (0.36, 1.00, 0.16, 0.16, 0.16, 0.16, 0.04, 0.04, 0.04, 0.01), \
+            (0.16, 0.16, 1.00, 0.36, 0.16, 0.16, 0.04, 0.04, 0.04, 0.01), \
+            (0.16, 0.16, 0.36, 1.00, 0.16, 0.16, 0.04, 0.04, 0.04, 0.01), \
+            (0.16, 0.16, 0.16, 0.16, 1.00, 0.36, 0.04, 0.04, 0.04, 0.01), \
+            (0.16, 0.16, 0.16, 0.16, 0.36, 1.00, 0.04, 0.04, 0.04, 0.01), \
+            (0.04, 0.04, 0.04, 0.04, 0.04, 0.04, 1.00, 0.16, 0.16, 0.16), \
+            (0.04, 0.04, 0.04, 0.04, 0.04, 0.04, 0.16, 1.00, 0.16, 0.16), \
+            (0.04, 0.04, 0.04, 0.04, 0.04, 0.04, 0.16, 0.16, 1.00, 0.16), \
+            (0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.16, 0.16, 0.16, 1.00) ])
         return fullcov
 ...
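The active fullcov is now a 10x10 prior correlation matrix (paired regions at 0.36, same-group at 0.16, cross-group at 0.04 or 0.01), with the 11th background row/column dropped. A hand-built matrix like this is worth checking for symmetry and positive definiteness before it is used as a covariance; a generic numpy sketch (not CTDAS code):

```python
import numpy as np

fullcov = np.array([
    (1.00, 0.36, 0.16, 0.16, 0.16, 0.16, 0.04, 0.04, 0.04, 0.01),
    (0.36, 1.00, 0.16, 0.16, 0.16, 0.16, 0.04, 0.04, 0.04, 0.01),
    (0.16, 0.16, 1.00, 0.36, 0.16, 0.16, 0.04, 0.04, 0.04, 0.01),
    (0.16, 0.16, 0.36, 1.00, 0.16, 0.16, 0.04, 0.04, 0.04, 0.01),
    (0.16, 0.16, 0.16, 0.16, 1.00, 0.36, 0.04, 0.04, 0.04, 0.01),
    (0.16, 0.16, 0.16, 0.16, 0.36, 1.00, 0.04, 0.04, 0.04, 0.01),
    (0.04, 0.04, 0.04, 0.04, 0.04, 0.04, 1.00, 0.16, 0.16, 0.16),
    (0.04, 0.04, 0.04, 0.04, 0.04, 0.04, 0.16, 1.00, 0.16, 0.16),
    (0.04, 0.04, 0.04, 0.04, 0.04, 0.04, 0.16, 0.16, 1.00, 0.16),
    (0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.16, 0.16, 0.16, 1.00),
])

assert np.allclose(fullcov, fullcov.T)        # symmetric
np.linalg.cholesky(fullcov)                   # raises LinAlgError if not positive definite
assert np.linalg.eigvalsh(fullcov).min() > 0  # all eigenvalues positive
```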
@@ -58,6 +58,9 @@ class ObservationOperator(object):
     def run(self,lag,dacycle,statevector):
         members = statevector.ensemble_members[lag]
+        self.forecast_nmembers = int(self.dacycle['da.optimizer.nmembers'])
+#        self.nparams = int(dacycle.dasystem['nparameters'])
+        self.nparams = int(self.dacycle['nparameters'])
         absolute_start_time = str((to_datetime(dacycle['abs.time.start'])).strftime('%Y%m%d%H'))
         absolute_start_time_ch = str((to_datetime(dacycle['abs.time.start'])).strftime('%Y-%m-%d'))
         starth = abs((to_datetime(dacycle['abs.time.start'])-dacycle['time.start']).days)*24
@@ -86,6 +89,7 @@ class ObservationOperator(object):
         savedict['units'] = "ppm"
         savedict['dims'] = dimid + dimmember
         savedict['comment'] = "Simulated model value created by COSMO"
+
         f.add_data(savedict,nsets=0)
         # Open file with x,y,z,t of model samples that need to be sampled
@@ -123,34 +127,47 @@ class ObservationOperator(object):
 #        for ncfile in ncfilelist:
 #            infile = os.path.join(ncfile + '.nc')
-# UNCOMMENT FROM HERE
-#        co2_bg = np.empty(self.forecast_nmembers)
-#
-#        logging.info('Multiplying emissions with parameters for lag %d' % (lag))
-#        for dt in rrule.rrule(rrule.HOURLY, dtstart=dacycle['time.start']+timedelta(hours=24*lag*int(dacycle['time.cycle'])), until=dacycle['time.start']+timedelta(hours=(lag+1)*24*int(dacycle['time.cycle']))):
-#            for ens in range(0,self.forecast_nmembers):
-#                dthh = dt.strftime('%H')
-#                co2_bg[ens] = members[ens].param_values[-1]
-#                ens = str(ens).zfill(3)
-#                cdo.setunit("'kg m-2 s-1' -expr,GPP_"+ens+"_F=CO2_GPP_F*parametermap -merge "+os.path.join(dacycle['da.bio.input'], 'gpp_%s.nc' % dt.strftime('%Y%m%d%H')), input = os.path.join(dacycle['dir.input'],"parameters_gpp_lag"+str(lag)+"."+ens+".nc"), output = os.path.join(dacycle['da.bio.input'], 'ensemble', "gpp_"+ens+"_%s.nc" % dt.strftime('%Y%m%d%H')))
-#                cdo.setunit("'kg m-2 s-1' -expr,RESP_"+ens+"_F=CO2_RESP_F*parametermap -merge "+os.path.join(dacycle['da.bio.input'], 'ra_%s.nc' % dt.strftime('%Y%m%d%H')), input = os.path.join(dacycle['dir.input'],"parameters_resp_lag"+str(lag)+"."+ens+".nc"), output = os.path.join(dacycle['da.bio.input'], 'ensemble', "ra_"+ens+"_%s.nc" % dt.strftime('%Y%m%d%H')))
-##            logging.debug('Background CO2 params are (%s)' % co2_bg)
-#            if dthh=='00':
-#                ct(dt.strftime('%Y%m%d'), co2_bg)
-#
-#            cdo.merge(input = os.path.join(dacycle['da.bio.input'], 'ensemble', "gpp_???_%s.nc" % dt.strftime('%Y%m%d%H')), output = os.path.join(dacycle['da.bio.input'], 'ensemble', "gpp_%s.nc" % dt.strftime('%Y%m%d%H')))
-#            cdo.merge(input = os.path.join(dacycle['da.bio.input'], 'ensemble', "ra_???_%s.nc" % dt.strftime('%Y%m%d%H')), output = os.path.join(dacycle['da.bio.input'], 'ensemble', "ra_%s.nc" % dt.strftime('%Y%m%d%H')))
-#
-#        os.chdir(dacycle['da.obsoperator.home'])
-#
+        self.lambda_file = os.path.join(self.outputdir, 'lambda.%s.nc' % self.dacycle['time.sample.stamp'])
+        ofile = Dataset(self.lambda_file, mode='w')
+        opar = ofile.createDimension('param', self.nparams)
+        omem = ofile.createDimension('member', self.forecast_nmembers)#len(members.nmembers))
+        l = ofile.createVariable('lambda', np.float32, ('member','param'),fill_value=-999.99)
+        co2 = np.empty(shape=(self.forecast_nmembers,self.nparams))
+        for m in range(0,20):
+            co2[m,:] = members[m].param_values
+        l[:] = co2
+        ofile.close()
+
+# UN/COMMENT FROM HERE
+        co2_bg = np.empty(self.forecast_nmembers)
+
+        logging.info('Multiplying emissions with parameters for lag %d' % (lag))
+        for dt in rrule.rrule(rrule.HOURLY, dtstart=dacycle['time.start']+timedelta(hours=24*lag*int(dacycle['time.cycle'])), until=dacycle['time.start']+timedelta(hours=(lag+1)*24*int(dacycle['time.cycle']))):
+            for ens in range(0,self.forecast_nmembers):
+                dthh = dt.strftime('%H')
+                co2_bg[ens] = members[ens].param_values[-1]
+                ens = str(ens).zfill(3)
+                cdo.setunit("'kg m-2 s-1' -expr,GPP_"+ens+"_F=CO2_GPP_F*parametermap -merge "+os.path.join(dacycle['da.bio.input'], 'gpp_%s.nc' % dt.strftime('%Y%m%d%H')), input = os.path.join(dacycle['dir.input'],"parameters_gpp_lag"+str(lag)+"."+ens+".nc"), output = os.path.join(dacycle['da.bio.input'], 'ensemble', "gpp_"+ens+"_%s.nc" % dt.strftime('%Y%m%d%H')))
+                cdo.setunit("'kg m-2 s-1' -expr,RESP_"+ens+"_F=CO2_RESP_F*parametermap -merge "+os.path.join(dacycle['da.bio.input'], 'ra_%s.nc' % dt.strftime('%Y%m%d%H')), input = os.path.join(dacycle['dir.input'],"parameters_resp_lag"+str(lag)+"."+ens+".nc"), output = os.path.join(dacycle['da.bio.input'], 'ensemble', "ra_"+ens+"_%s.nc" % dt.strftime('%Y%m%d%H')))
+#            logging.debug('Background CO2 params are (%s)' % co2_bg)
+            if dthh=='00':
+                ct(dt.strftime('%Y%m%d'), co2_bg)
+
+            cdo.merge(input = os.path.join(dacycle['da.bio.input'], 'ensemble', "gpp_???_%s.nc" % dt.strftime('%Y%m%d%H')), output = os.path.join(dacycle['da.bio.input'], 'ensemble', "gpp_%s.nc" % dt.strftime('%Y%m%d%H')))
+            cdo.merge(input = os.path.join(dacycle['da.bio.input'], 'ensemble', "ra_???_%s.nc" % dt.strftime('%Y%m%d%H')), output = os.path.join(dacycle['da.bio.input'], 'ensemble', "ra_%s.nc" % dt.strftime('%Y%m%d%H')))
+
+        os.chdir(dacycle['da.obsoperator.home'])
         if os.path.exists(dacycle['dir.da_run']+'/'+absolute_start_time+"_"+str(starth+lag*168)+"_"+str(endh+lag*168)+"/cosmo/output/"):
             if os.path.exists(dacycle['dir.da_run']+"/non_opt_"+absolute_start_time+"_"+str(starth+lag*168)+"_"+str(endh+lag*168)+"/cosmo/output/"):
                 os.rename(dacycle['dir.da_run']+"/"+absolute_start_time+"_"+str(starth+lag*168)+"_"+str(endh+lag*168), dacycle['dir.da_run']+"/old_non_opt_"+dacycle['time.start'].strftime('%Y%m%d%H')+"_"+str(starth+lag*168)+"_"+str(endh+lag*168))
             else:
                 os.rename(dacycle['dir.da_run']+"/"+absolute_start_time+"_"+str(starth+lag*168)+"_"+str(endh+lag*168), dacycle['dir.da_run']+"/non_opt_"+dacycle['time.start'].strftime('%Y%m%d%H')+"_"+str(starth+lag*168)+"_"+str(endh+lag*168))
-        os.system('python run_chain.py 21ens '+absolute_start_time_ch+' '+str(starth+lag*168)+' '+str(endh+lag*168)+' -j meteo icbc emissions biofluxes int2lm post_int2lm cosmo')
+        os.system('python run_chain.py '+self.dacycle['run.name']+' '+absolute_start_time_ch+' '+str(starth+lag*168)+' '+str(endh+lag*168)+' -j meteo icbc emissions biofluxes int2lm post_int2lm cosmo')
+        logging.info('COSMO done!')
         os.chdir(dacycle['dir.da_run'])
         args = [
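The new lambda-file code above stacks every member's param_values into a (nmembers, nparams) array before writing it to netCDF. Note that the hard-coded `for m in range(0,20)` fills only 20 rows even though da.optimizer.nmembers is 21 in this commit, so iterating over the members list itself is safer. A netCDF-free sketch of the stacking step (the `Member` class is a stand-in for a CTDAS ensemble member, for illustration only):

```python
import numpy as np

class Member:
    """Stand-in for a CTDAS ensemble member (illustrative, not the real class)."""
    def __init__(self, nparams, seed):
        rng = np.random.default_rng(seed)
        self.param_values = rng.normal(1.0, 0.1, nparams)

nmembers, nparams = 21, 181
members = [Member(nparams, seed=m) for m in range(nmembers)]

# Stack all members, rather than a hard-coded range(0, 20):
co2 = np.array([mem.param_values for mem in members])
assert co2.shape == (nmembers, nparams)
```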
@@ -160,12 +177,10 @@ class ObservationOperator(object):
         with Pool(self.forecast_nmembers) as pool:
             pool.starmap(self.extract_model_data, args)
-#        pool.close()
-#        pool.join()
         for i in range(0,self.forecast_nmembers):
             idx = str(i).zfill(3)
-#            cosmo_file = os.path.join('/store/empa/em05/parsenov/cosmo_data/51ens/model_'+idx+'_%s.nc' % dacycle['time.sample.stamp'])
+#            cosmo_file = os.path.join('/store/empa/em05/parsenov/cosmo_data/OK_DONT_TOUCH/model_'+idx+'_%s.nc' % dacycle['time.sample.stamp'])  # last run with non-frac
             cosmo_file = os.path.join('/store/empa/em05/parsenov/cosmo_data/model_'+idx+'_%s.nc' % dacycle['time.sample.stamp'])
             ifile = Dataset(cosmo_file, mode='r')
             model_data[i,:] = (np.squeeze(ifile.variables['CO2'][:])*29./44.01)*1E6  # in ppm
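The factor (29./44.01)*1E6 above converts the model's CO2 mass mixing ratio (kg CO2 per kg air) to a mole fraction in ppm via the ratio of molar masses, M_air ≈ 29 g/mol over M_CO2 = 44.01 g/mol. A standalone check of the arithmetic:

```python
# Convert a CO2 mass mixing ratio (kg/kg) to a mole fraction in ppm,
# as done in the extract step: x_ppm = q * (M_air / M_CO2) * 1e6
M_AIR = 29.0    # g/mol, dry-air molar mass as used in the diff
M_CO2 = 44.01   # g/mol

def mass_to_ppm(q):
    return q * (M_AIR / M_CO2) * 1e6

# A mass mixing ratio of about 6.07e-4 kg/kg corresponds to roughly 400 ppm:
print(round(mass_to_ppm(6.07e-4), 1))
```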
@@ -235,7 +250,7 @@ class ObservationOperator(object):
             site_height.main(cosmo_out, str(ens), ss, time_stamp)
             cdo.intlevel("860", input = cosmo_out+"CO2_60lev_"+ens+"_lhw_"+time_stamp+".nc", output = cosmo_out+"modelled_"+ens+"_lhw_"+time_stamp+".nc")
-            cdo.intlevel("797", input = cosmo_out+"CO2_60lev_"+ens+"_brm_"+time_stamp+".nc", output = cosmo_out+"modelled_"+ens+"_brm_"+time_stamp+".nc")  # this needs changing to 1009 (797 + 212)
+            cdo.intlevel("1009", input = cosmo_out+"CO2_60lev_"+ens+"_brm_"+time_stamp+".nc", output = cosmo_out+"modelled_"+ens+"_brm_"+time_stamp+".nc")
             cdo.intlevel("3580", input = cosmo_out+"CO2_60lev_"+ens+"_jfj_"+time_stamp+".nc", output = cosmo_out+"modelled_"+ens+"_jfj_"+time_stamp+".nc")
             cdo.intlevel("1205", input = cosmo_out+"CO2_60lev_"+ens+"_ssl_"+time_stamp+".nc", output = cosmo_out+"modelled_"+ens+"_ssl_"+time_stamp+".nc")
 ...
@@ -48,7 +48,9 @@ class CO2Optimizer(Optimizer):
         self.localization = True
         self.localizetype = 'CT2007'
         # T-test values for two-tailed Student's t-test using 95% confidence interval for some options of nmembers
-        if self.nmembers == 50:
+        if self.nmembers == 21:
+            self.tvalue = 2.08
+        elif self.nmembers == 50:
             self.tvalue = 2.0086
         elif self.nmembers == 100:
             self.tvalue = 1.9840
 ...
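The new branch adds the critical value for the 21-member ensemble: a two-tailed Student's t-test at 95% confidence with df = nmembers - 1 = 20 has t ≈ 2.086, which the diff rounds to 2.08. The growing if/elif chain could equally be a lookup table; a sketch (the `TVALUES` dict and `tvalue_for` name are illustrative, not CTDAS code):

```python
# Two-tailed Student's t critical values at 95% confidence, keyed by
# ensemble size (degrees of freedom = nmembers - 1), as used in the diff:
TVALUES = {21: 2.08, 50: 2.0086, 100: 1.9840}

def tvalue_for(nmembers):
    try:
        return TVALUES[nmembers]
    except KeyError:
        raise ValueError('no t-value tabulated for nmembers=%d' % nmembers)

print(tvalue_for(21))  # 2.08
```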
This diff is collapsed.
@@ -37,9 +37,16 @@ obs.sites.rc : ${obspack.input.dir}/sites_weights_ctdas.rc
 !regtype : gridded_oif30
 nparameters : 11
 !random.seed : 4385
-regionsfile : /store/empa/em05/parsenov/CTDAS/ctdas-cosmo/da/analysis/regions_cosmo.nc
-extendedregionsfile : /store/empa/em05/parsenov/CTDAS/ctdas-cosmo/da/analysis/regions_cosmo.nc
+!regionsfile : /store/empa/em05/parsenov/CTDAS/ctdas-cosmo/da/analysis/regions_cosmo.nc
+!extendedregionsfile : /store/empa/em05/parsenov/CTDAS/ctdas-cosmo/da/analysis/regions_cosmo.nc
+!regionsfile : ${datadir}/covariances/gridded_NH/griddedNHparameters.nc
+regionsfile : /store/empa/em05/parsenov/CTDAS/ctdas-cosmo/da/analysis/cosmo_9_reg_mittel_4.nc
+extendedregionsfile : /store/empa/em05/parsenov/CTDAS/ctdas-cosmo/da/analysis/cosmo_9_reg_mittel_4.nc
+!fracregionsfile : /store/empa/em05/parsenov/CTDAS/ctdas-cosmo/da/analysis/cosmo_9_tc.nc
+!extendedregionsfile : /store/empa/em05/parsenov/CTDAS/ctdas-cosmo/da/analysis/cosmo_regions.nc
+!regionsfile : /store/empa/em05/parsenov/carbontracker/da/analysis/griddedNHparameters_from_ingrid.nc
 !random.seed.init: ${datadir}/randomseedinit.pickle
 ! Include a naming scheme for the variables
 ...
@@ -36,7 +36,7 @@ from da.cosmo.obspack_globalviewplus2 import ObsPackObservations
 from da.cosmo.optimizer import CO2Optimizer
 #from da.cosmo.observationoperator_parallel import ObservationOperator # does not fully work
 from da.cosmo.observationoperator import ObservationOperator
-#from da.analysis.expand_fluxes import save_weekly_avg_1x1_data, save_weekly_avg_state_data, save_weekly_avg_tc_data, save_weekly_avg_ext_tc_data
+#from da.cosmo.expand_fluxes import save_weekly_avg_1x1_data, save_weekly_avg_state_data, save_weekly_avg_tc_data, save_weekly_avg_ext_tc_data
 #from da.analysis.expand_molefractions import write_mole_fractions
@@ -78,9 +78,9 @@ ensemble_smoother_pipeline(dacycle, platform, dasystem, samples, statevector, ob
 ################### All done, extra stuff can be added next, such as analysis
 ##########################################################################################
-logging.info(header + "All done. Cheers" + footer)
+logging.info(header + "All done. God bless" + footer)

 sys.exit(0)

 save_weekly_avg_1x1_data(dacycle, statevector)
 save_weekly_avg_state_data(dacycle, statevector)
 save_weekly_avg_tc_data(dacycle, statevector)
 ...