Commit 45dbde5a authored by brunner

all ok

parent 494da80e
! CarbonTracker Data Assimilation Shell (CTDAS) Copyright (C) 2017 Wouter Peters.
! Users are recommended to contact the developers (wouter.peters@wur.nl) to receive
! updates of the code. See also: http://www.carbontracker.eu.
!
! This program is free software: you can redistribute it and/or modify it under the
! terms of the GNU General Public License as published by the Free Software Foundation,
! version 3. This program is distributed in the hope that it will be useful, but
! WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
! FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
!
! You should have received a copy of the GNU General Public License along with this
! program. If not, see <http://www.gnu.org/licenses/>.
! author: Wouter Peters
!
! This is a blueprint for an rc-file used in CTDAS. Feel free to modify it, and please go to the main webpage for further documentation.
!
! Note that rc-files follow the convention that commented lines start with an exclamation mark (!), while special lines start with a hash sign (#).
!
! When running the script start_ctdas.sh, this .rc file will be copied to your run directory, and some items will be replaced for you.
! The result will be a nearly ready-to-go rc-file for your assimilation job. The entries and their meaning are explained by the comments below.
!
!
! HISTORY:
!
! Created on August 20th, 2013 by Wouter Peters
!
!
! The times at which to start and end the data assimilation experiment, in the format YYYY-MM-DD HH:MM:SS
! The following lines are used for an initial start
time.start : 2013-04-01 00:00:00
time.finish : 2013-04-07 23:00:00
time.end : 2013-04-07 23:00:00
abs.time.start : 2013-04-01 00:00:00
! Whether to restart the CTDAS system from a previous cycle, or to start the sequence fresh. Valid entries are T/F/True/False/TRUE/FALSE
time.restart : F
da.restart.tstamp : 2013-04-01 00:00:00
! The length of a cycle is given in days, such that the integer 7 denotes the typically used weekly cycle. Valid entries are integers > 1
time.cycle : 7
! The number of cycles of lag to use for a smoother version of CTDAS. CarbonTracker CO2 typically uses 5 weeks of lag. Valid entries are integers > 0
time.nlag : 2
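!
! As an illustration only (a sketch, not CTDAS code), the time keys above could be parsed in Python as:
!
!     from datetime import datetime, timedelta
!     fmt = '%Y-%m-%d %H:%M:%S'
!     start = datetime.strptime('2013-04-01 00:00:00', fmt)   # time.start
!     end = datetime.strptime('2013-04-07 23:00:00', fmt)     # time.end
!     cycle = timedelta(days=7)                               # time.cycle, given in days
!     next_cycle_start = start + cycle
!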
! The directory under which the code, input, and output will be stored. This is the base directory for a run. The
! placeholder '/' will be replaced by the start_ctdas.sh script with a user-specified folder name. DO NOT REPLACE
run.name : 40_9reg
dir.da_run : /scratch/snx3000/parsenov/${run.name}
restartmap.dir : ${dir.da_run}/input
! The resources used to complete the data assimilation experiment. This depends on your computing platform.
! The number of cycles per job denotes how many cycles should be completed before starting a new process or job; this
! allows you to complete many cycles before resubmitting a job to the queue and having to wait again for resources.
! Valid entries are integers > 0
da.resources.ncycles_per_job : 1
! The ntasks key specifies the number of tasks to use for the MPI part of the code, if relevant. Note that the CTDAS code
! itself is not parallelized and the python code underlying CTDAS does not use multiple processors. The chosen observation
! operator, however, might use many processors, as TM5 does. Valid entries are integers > 0
da.resources.ntasks : 1
! This specifies the amount of wall-clock time to request for each job. Its value depends on your computing platform and might take
! any form appropriate for your system. Typically, HPC queueing systems allow you a certain number of hours of usage before
! your job is killed, and you are expected to finalize and submit a next job before that time. Valid entries are strings.
da.resources.ntime : 44:00:00
! The resource settings above will result in a job file that runs 1 cycle per job, requesting 1 task for a
! wall-clock duration of 44 hours
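!
! As an illustration only (assuming a SLURM-style scheduler; the actual job file is created by CTDAS
! for your own platform), these keys might be rendered into a batch header along the lines of:
!
!     ntasks = '1'            # da.resources.ntasks
!     walltime = '44:00:00'   # da.resources.ntime
!     header = '#SBATCH --ntasks=%s\n#SBATCH --time=%s' % (ntasks, walltime)
!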
!
! Info on the DA system used; this depends on your application of CTDAS and might refer to, for instance, CO2 or CH4 optimizations.
!
da.system : CarbonTracker
! The specific settings for your system are read from a separate rc-file, which points to the data directories, observations, etc.
da.system.rc : da/rc/carbontracker_cosmo.rc
! This flag should probably be moved to the da.system.rc file. It denotes which type of filtering to use in the optimizer
da.system.localization : CT2007
! Info on the observation operator to be used; these keys help to identify the settings for the transport model in this case
da.obsoperator : cosmo
!
! The transport model used as observation operator is controlled by an rc-file as well. The value below refers to the
! model configuration to be used in this experiment.
!
da.obsoperator.home : /store/empa/em05/parsenov/cosmo_processing_chain
da.bio.input : /store/empa/em05/parsenov/cosmo_input/vprm/processed
da.bg.input : /store/empa/em05/parsenov/cosmo_input/icbc/processed
da.obsoperator.rc : ${da.obsoperator.home}/tm5-ctdas-ei-zoom.rc
!forward.savestate.exceptsam : TRUE
!
! The number of ensemble members used in the experiment. Valid entries are integers > 2
!
da.optimizer.nmembers : 40
nparameters : 181
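!
! For reference, these two values are read by the statevector as integers, along the lines of
! (a sketch, matching the setup() method included further below in this commit):
!
!     self.nmembers = int(dacycle['da.optimizer.nmembers'])
!     self.nparams = int(dacycle.dasystem['nparameters'])
!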
! Finally, info on the archive task, if any. Archive tasks are run after each cycle to ensure that the results of each cycle are
! preserved, even if you run on scratch space or a temporary disk. Since an experiment can take multiple weeks to complete, moving
! your results out of the way, or backing them up, is usually a good idea. Note that the tasks are commented and need to be uncommented
! to use this feature.
! The following key specifies that two archive tasks will be executed, one called 'alldata' and the other 'onlyresults'.
!task.rsync : alldata onlyresults
! The specifics for the first task.
! 1> Which source directories to back up. Valid entry is a list of folders separated by spaces
! 2> Which destination directory to use. Valid entries are a folder name, or server and folder name in rsync format as below
! 3> Which flags to add to the rsync command
! The settings below will result in an rsync command that looks like:
!
! rsync -auv -e ssh ${dir.da_run} you@yourserver.com:/yourfolder/
!
!task.rsync.alldata.sourcedirs : ${dir.da_run}
!task.rsync.alldata.destinationdir : you@yourserver.com:/yourfolder/
!task.rsync.alldata.flags : -auv -e ssh
! Repeated for rsync task 2, note that we only back up the analysis and output dirs here
!task.rsync.onlyresults.sourcedirs : ${dir.da_run}/analysis ${dir.da_run}/output
!task.rsync.onlyresults.destinationdir : you@yourserver.com:/yourfolder/
!task.rsync.onlyresults.flags : -auv -e ssh
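
For illustration, a hypothetical Python sketch (names assumed, not taken from CTDAS) of how the
task.rsync keys above could be expanded into the rsync command shown in the comments:

    import subprocess

    sourcedirs = '/scratch/snx3000/parsenov/40_9reg'.split()   # task.rsync.alldata.sourcedirs, expanded
    destdir = 'you@yourserver.com:/yourfolder/'                # task.rsync.alldata.destinationdir
    flags = '-auv -e ssh'                                      # task.rsync.alldata.flags
    cmd = ['rsync'] + flags.split() + sourcedirs + [destdir]
    # subprocess.run(cmd, check=True)  # would run: rsync -auv -e ssh /scratch/snx3000/parsenov/40_9reg you@yourserver.com:/yourfolder/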
! CarbonTracker Data Assimilation Shell (CTDAS) Copyright (C) 2017 Wouter Peters.
! Users are recommended to contact the developers (wouter.peters@wur.nl) to receive
! updates of the code. See also: http://www.carbontracker.eu.
!
! This program is free software: you can redistribute it and/or modify it under the
! terms of the GNU General Public License as published by the Free Software Foundation,
! version 3. This program is distributed in the hope that it will be useful, but
! WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
! FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
!
! You should have received a copy of the GNU General Public License along with this
! program. If not, see <http://www.gnu.org/licenses/>.
! author: Wouter Peters
!
! This is a blueprint for an rc-file used in CTDAS. Feel free to modify it, and please go to the main webpage for further documentation.
!
! Note that rc-files follow the convention that commented lines start with an exclamation mark (!), while special lines start with a hash sign (#).
!
! When running the script start_ctdas.sh, this .rc file will be copied to your run directory, and some items will be replaced for you.
! The result will be a nearly ready-to-go rc-file for your assimilation job. The entries and their meaning are explained by the comments below.
!
!
! HISTORY:
!
! Created on August 20th, 2013 by Wouter Peters
!
!
! The times at which to start and end the data assimilation experiment, in the format YYYY-MM-DD HH:MM:SS
! The following lines are used for an initial start
time.start : 2013-04-01 00:00:00
time.finish : 2013-04-07 23:00:00
time.end : 2013-04-07 23:00:00
abs.time.start : 2013-04-01 00:00:00
! Whether to restart the CTDAS system from a previous cycle, or to start the sequence fresh. Valid entries are T/F/True/False/TRUE/FALSE
time.restart : F
da.restart.tstamp : 2013-04-01 00:00:00
! The length of a cycle is given in days, such that the integer 7 denotes the typically used weekly cycle. Valid entries are integers > 1
time.cycle : 7
! The number of cycles of lag to use for a smoother version of CTDAS. CarbonTracker CO2 typically uses 5 weeks of lag. Valid entries are integers > 0
time.nlag : 2
! The directory under which the code, input, and output will be stored. This is the base directory for a run. The
! placeholder '/' will be replaced by the start_ctdas.sh script with a user-specified folder name. DO NOT REPLACE
run.name : 51_9reg
dir.da_run : /scratch/snx3000/parsenov/${run.name}
restartmap.dir : ${dir.da_run}/input
! The resources used to complete the data assimilation experiment. This depends on your computing platform.
! The number of cycles per job denotes how many cycles should be completed before starting a new process or job; this
! allows you to complete many cycles before resubmitting a job to the queue and having to wait again for resources.
! Valid entries are integers > 0
da.resources.ncycles_per_job : 1
! The ntasks key specifies the number of tasks to use for the MPI part of the code, if relevant. Note that the CTDAS code
! itself is not parallelized and the python code underlying CTDAS does not use multiple processors. The chosen observation
! operator, however, might use many processors, as TM5 does. Valid entries are integers > 0
da.resources.ntasks : 1
! This specifies the amount of wall-clock time to request for each job. Its value depends on your computing platform and might take
! any form appropriate for your system. Typically, HPC queueing systems allow you a certain number of hours of usage before
! your job is killed, and you are expected to finalize and submit a next job before that time. Valid entries are strings.
da.resources.ntime : 44:00:00
! The resource settings above will result in a job file that runs 1 cycle per job, requesting 1 task for a
! wall-clock duration of 44 hours
!
! Info on the DA system used; this depends on your application of CTDAS and might refer to, for instance, CO2 or CH4 optimizations.
!
da.system : CarbonTracker
! The specific settings for your system are read from a separate rc-file, which points to the data directories, observations, etc.
da.system.rc : da/rc/carbontracker_cosmo.rc
! This flag should probably be moved to the da.system.rc file. It denotes which type of filtering to use in the optimizer
da.system.localization : CT2007
! Info on the observation operator to be used; these keys help to identify the settings for the transport model in this case
da.obsoperator : cosmo
!
! The transport model used as observation operator is controlled by an rc-file as well. The value below refers to the
! model configuration to be used in this experiment.
!
da.obsoperator.home : /store/empa/em05/parsenov/cosmo_processing_chain
da.bio.input : /store/empa/em05/parsenov/cosmo_input/vprm/processed
da.bg.input : /store/empa/em05/parsenov/cosmo_input/icbc/processed
da.obsoperator.rc : ${da.obsoperator.home}/tm5-ctdas-ei-zoom.rc
!forward.savestate.exceptsam : TRUE
!
! The number of ensemble members used in the experiment. Valid entries are integers > 2
!
da.optimizer.nmembers : 51
nparameters : 181
! Finally, info on the archive task, if any. Archive tasks are run after each cycle to ensure that the results of each cycle are
! preserved, even if you run on scratch space or a temporary disk. Since an experiment can take multiple weeks to complete, moving
! your results out of the way, or backing them up, is usually a good idea. Note that the tasks are commented and need to be uncommented
! to use this feature.
! The following key specifies that two archive tasks will be executed, one called 'alldata' and the other 'onlyresults'.
!task.rsync : alldata onlyresults
! The specifics for the first task.
! 1> Which source directories to back up. Valid entry is a list of folders separated by spaces
! 2> Which destination directory to use. Valid entries are a folder name, or server and folder name in rsync format as below
! 3> Which flags to add to the rsync command
! The settings below will result in an rsync command that looks like:
!
! rsync -auv -e ssh ${dir.da_run} you@yourserver.com:/yourfolder/
!
!task.rsync.alldata.sourcedirs : ${dir.da_run}
!task.rsync.alldata.destinationdir : you@yourserver.com:/yourfolder/
!task.rsync.alldata.flags : -auv -e ssh
! Repeated for rsync task 2, note that we only back up the analysis and output dirs here
!task.rsync.onlyresults.sourcedirs : ${dir.da_run}/analysis ${dir.da_run}/output
!task.rsync.onlyresults.destinationdir : you@yourserver.com:/yourfolder/
!task.rsync.onlyresults.flags : -auv -e ssh
@@ -99,7 +99,8 @@ da.obsoperator : cosmo
! be used as observation operator in this experiment.
!
-da.obsoperator.home : /store/empa/em05/parsenov/cosmo_processing_chain
+da.obsoperator.home : /store/empa/em05/parsenov/cosmo_my_prc_chain
+!da.obsoperator.home : /store/empa/em05/parsenov/cosmo_processing_chain
da.bio.input : /store/empa/em05/parsenov/cosmo_input/vprm/processed
da.bg.input : /store/empa/em05/parsenov/cosmo_input/icbc/processed
da.obsoperator.rc : ${da.obsoperator.home}/tm5-ctdas-ei-zoom.rc
@@ -27,6 +27,7 @@ sys.path.append(os.getcwd())
import logging
import numpy as np
#from da.cosmo.statevector_uniform import StateVector, EnsembleMember
+#from da.cosmo.statevector_read_from_output import StateVector, EnsembleMember
from da.cosmo.statevector import StateVector, EnsembleMember
@@ -136,6 +136,8 @@ class ObservationOperator(object):
        co2 = np.empty(shape=(self.forecast_nmembers,self.nparams))
        for m in range(0,20):
#        for m in range(0,39):
#        for m in range(0,str(self.dacycle['da.optimizer.nmembers'])-1):
            co2[m,:] = members[m].param_values
        l[:] = co2
        ofile.close()
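
A side note on the commented-out range above: str(self.dacycle['da.optimizer.nmembers'])-1 would
raise a TypeError (string minus integer). A working variant, as a sketch assuming dacycle behaves
like a dictionary of strings, would convert first:

    nmembers = int(self.dacycle['da.optimizer.nmembers'])
    for m in range(nmembers):
        co2[m,:] = members[m].param_values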
@@ -170,27 +172,26 @@ class ObservationOperator(object):
        logging.info('COSMO done!')
        os.chdir(dacycle['dir.da_run'])

        if not advance:
            args = [
                (dacycle, starth+168*lag, endh+168*lag-1, n)
                for n in range(0,self.forecast_nmembers)
            ]

            with Pool(self.forecast_nmembers) as pool:
                pool.starmap(self.extract_model_data, args)

            for i in range(0,self.forecast_nmembers):
                idx = str(i).zfill(3)
#                cosmo_file = os.path.join('/store/empa/em05/parsenov/cosmo_data/OK_DONT_TOUCH/model_'+idx+'_%s.nc' % dacycle['time.sample.stamp']) # last run with non-frac
                cosmo_file = os.path.join('/store/empa/em05/parsenov/cosmo_data/model_'+idx+'_%s.nc' % dacycle['time.sample.stamp'])
                ifile = Dataset(cosmo_file, mode='r')
                model_data[i,:] = (np.squeeze(ifile.variables['CO2'][:])*29./44.01)*1E6 # in ppm
                ifile.close()

            for j,data in enumerate(zip(ids,obs,mdm)):
                f.variables['obs_num'][j] = data[0]
                f.variables['flask'][j,:] = model_data[:,j]
            f.close()
#### WARNING ACHTUNG PAZNJA POZOR VNEMANIE data[2] is model data mismatch (=1000) by default in tools/io4.py!!! pavle
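
The block above fans the per-member extraction out over a process pool. A minimal self-contained
sketch of the same multiprocessing.Pool.starmap pattern (hypothetical function and arguments, not
the CTDAS code):

    from multiprocessing import Pool

    def extract_model_data(run, start_hour, end_hour, member):
        # stand-in for the real per-member extraction work
        return member, end_hour - start_hour

    if __name__ == '__main__':
        args = [('run', 0, 167, n) for n in range(4)]         # one tuple per ensemble member
        with Pool(4) as pool:
            results = pool.starmap(extract_model_data, args)  # each tuple is unpacked into the call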
@@ -50,8 +50,10 @@ class CO2Optimizer(Optimizer):
        #T-test values for two-tailed student's T-test using 95% confidence interval for some options of nmembers
        if self.nmembers == 21:
            self.tvalue = 2.08
-        elif self.nmembers == 50:
+        elif self.nmembers == 50 or self.nmembers == 51:
            self.tvalue = 2.0086
+        elif self.nmembers == 40:
+            self.tvalue = 2.021
        elif self.nmembers == 100:
            self.tvalue = 1.9840
        elif self.nmembers == 150:
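
The hard-coded numbers above are two-tailed 95% critical values of Student's t-distribution; they
can be reproduced with scipy (a sketch; the degrees of freedom appear to be taken equal to the
listed nmembers, which matches the values in the code):

    from scipy.stats import t
    for df in (40, 50, 100):
        print(df, round(t.ppf(0.975, df), 4))   # 2.0211, 2.0086, 1.984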
@@ -292,12 +292,17 @@ class StateVector(object):
        # Create members 1:nmembers and add to ensemble_members list
        for member in range(1, self.nmembers):
-            rands = np.random.uniform(low=-1., high=1., size=self.nparams-1)
-            rands_bg = np.random.uniform(low=-0.05, high=0.05, size=1)
+#            rands = np.random.uniform(low=-1., high=1., size=self.nparams-1)
+#            rands_bg = np.random.uniform(low=-0.05, high=0.05, size=1)
#            rands = np.random.randn(self.nparams-1)
#            rands_bg = np.random.randn(1)*1E-4 # x variance(1E-4)
+            rands = np.random.normal(loc=0.0, scale=1, size=self.nparams-1)
#            rands = np.random.normal(loc=0.0, scale=0.7, size=self.nparams-1)
+            rands_bg = np.random.normal(loc=0.0, scale=0.05, size=1) #*1E-4 # x variance(1E-4)
#            rands_bg = np.random.normal(loc=0.0, scale=1E-4, size=1) #*1E-4 # x variance(1E-4)
#            rands_bg = np.random.normal(loc=0.0, scale=0.68, size=1)*1E-4 # x variance(1E-4)
            newmember = EnsembleMember(member)
#            rands = rands.reshape([9,20])
#            randsC = np.zeros(shape=(9,20))
#            for r in range(0,9):
#                randsC[r,:] = np.hstack((np.dot(C, rands[r,0:10]),np.dot(C, rands[r,10:20])))
@@ -305,7 +310,7 @@ class StateVector(object):
            randsC = np.hstack((np.dot(C, rands[0:90]),np.dot(C, rands[90:180])))
            newmember.param_values = (np.hstack((randsC.flatten(),rands_bg)) + newmean).ravel() #THIS ONE
            newmember.param_values[newmember.param_values<0.] = 0.
-            newmember.param_values[newmember.param_values>2.] = 2.
+#            newmember.param_values[newmember.param_values>2.] = 2.
            self.ensemble_members[lag].append(newmember)

        logging.debug('%d new ensemble members were added to the state vector # %d' % (self.nmembers, (lag + 1)))
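
The member generation above draws standard-normal perturbations, correlates them through the matrix
C, adds them to the mean state, and clips negative parameter values. A minimal sketch of the pattern
(sizes and an identity covariance assumed for illustration; in the code above, C would be a factor
of the prior covariance):

    import numpy as np

    n = 90                                       # one tracer block, as in rands[0:90] / rands[90:180]
    C = np.linalg.cholesky(np.eye(n))            # stand-in for the real covariance factor
    rands = np.random.normal(0.0, 1.0, 2 * n)
    randsC = np.hstack((np.dot(C, rands[:n]), np.dot(C, rands[n:])))
    values = np.hstack((randsC, [0.0])) + 1.0    # perturb around a mean of 1, plus a background term
    values[values < 0.] = 0.                     # same non-negativity clipping as above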
"""CarbonTracker Data Assimilation Shell (CTDAS) Copyright (C) 2017 Wouter Peters.
Users are recommended to contact the developers (wouter.peters@wur.nl) to receive
updates of the code. See also: http://www.carbontracker.eu.
This program is free software: you can redistribute it and/or modify it under the
terms of the GNU General Public License as published by the Free Software Foundation,
version 3. This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this
program. If not, see <http://www.gnu.org/licenses/>."""
#!/usr/bin/env python
# ct_statevector_tools.py
"""
.. module:: statevector
.. moduleauthor:: Wouter Peters
Revision History:
File created on 28 Jul 2010.
The module statevector implements the data structure and methods needed to work with state vectors (a set of unknown parameters to be optimized by a DA system) of different lengths, types, and configurations. Two baseclasses together form a generic framework:
* :class:`~da.baseclasses.statevector.StateVector`
* :class:`~da.baseclasses.statevector.EnsembleMember`
As usual, specific implementations of StateVector objects are done through inheritance from these baseclasses. For an example of designing
your own StateVector subclass, we refer to :ref:`tut_chapter5`.
.. autoclass:: da.baseclasses.statevector.StateVector
.. autoclass:: da.baseclasses.statevector.EnsembleMember
"""
import os
import logging
import numpy as np
from datetime import timedelta
import da.cosmo.io4 as io
from netCDF4 import Dataset
identifier = 'Baseclass Statevector '
version = '0.0'
################### Begin Class EnsembleMember ###################
class EnsembleMember(object):
    """
    An ensemble member object consists of:

    * a member number
    * parameter values
    * an observation object to hold sampled values for this member

    Ensemble members are initialized by passing only an ensemble member number; all data is added by methods
    from the :class:`~da.baseclasses.statevector.StateVector`. Ensemble member objects have almost no functionality
    except to write their data to file using method :meth:`~da.baseclasses.statevector.EnsembleMember.write_to_file`

    .. automethod:: da.baseclasses.statevector.EnsembleMember.__init__
    .. automethod:: da.baseclasses.statevector.EnsembleMember.write_to_file
    .. automethod:: da.baseclasses.statevector.EnsembleMember.AddCustomFields
    """
    def __init__(self, membernumber):
        """
        :param membernumber: integer ensemble number
        :rtype: None

        An EnsembleMember object is initialized with only a number, and holds two attributes as containers for later
        data:

        * param_values, will hold the actual values of the parameters for this member
        * ModelSample, will hold an :class:`~da.baseclasses.obs.Observation` object and the model samples resulting from this member's data
        """
        self.membernumber = membernumber   # the member number
        self.param_values = None           # Parameter values of this member
################### End Class EnsembleMember ###################
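# A minimal usage sketch (illustration only; in practice the StateVector fills these fields):
#
#     import numpy as np
#     member = EnsembleMember(1)
#     member.param_values = np.ones(181)   # parameter values are attached later by make_new_ensemble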
################### Begin Class StateVector ###################
class StateVector(object):
    """
    The StateVector object first of all contains the data structure of a statevector, defined by 3 attributes that define the
    dimensions of the problem in parameter space:

    * nlag
    * nparameters
    * nmembers

    The fourth important dimension `nobs` is not related to the StateVector directly but is initialized to 0, and later on
    modified to be used in other parts of the pipeline:

    * nobs

    These values are set as soon as :meth:`~da.baseclasses.statevector.StateVector.setup` is called from the :ref:`pipeline`.
    Additionally, the value of attribute `isOptimized` is set to `False`, indicating that the StateVector holds a-priori values
    and has not been modified by the :ref:`optimizer`.

    StateVector objects can be filled with data in two ways:

    1. By reading the data from file
    2. By creating the data through a set of method calls

    Option (1) is invoked using method :meth:`~da.baseclasses.statevector.StateVector.read_from_file`.
    Option (2) consists of a call to method :meth:`~da.baseclasses.statevector.StateVector.make_new_ensemble`

    Once the StateVector object has been filled with data, it is used in the pipeline and a few more methods are
    invoked from there:

    * :meth:`~da.baseclasses.statevector.StateVector.propagate`, to advance the StateVector from t=t to t=t+1
    * :meth:`~da.baseclasses.statevector.StateVector.write_to_file`, to write the StateVector to a NetCDF file for later use

    The methods are described below:

    .. automethod:: da.baseclasses.statevector.StateVector.setup
    .. automethod:: da.baseclasses.statevector.StateVector.read_from_file
    .. automethod:: da.baseclasses.statevector.StateVector.write_to_file
    .. automethod:: da.baseclasses.statevector.StateVector.make_new_ensemble
    .. automethod:: da.baseclasses.statevector.StateVector.propagate
    .. automethod:: da.baseclasses.statevector.StateVector.write_members_to_file

    Finally, the StateVector can be mapped to a gridded array, or to a vector of TransCom regions, using:

    .. automethod:: da.baseclasses.statevector.StateVector.grid2vector
    .. automethod:: da.baseclasses.statevector.StateVector.vector2grid
    .. automethod:: da.baseclasses.statevector.StateVector.vector2tc
    .. automethod:: da.baseclasses.statevector.StateVector.state2tc
    """
    def __init__(self):
        self.ID = identifier
        self.version = version

        # The following code allows the object to be initialized with a dacycle object already present. Otherwise, it can
        # be added at a later moment.

        logging.info('Statevector object initialized: %s' % self.ID)
    def setup(self, dacycle):
        """
        Setup the object by specifying the dimensions.
        There are two major requirements for each statevector that you want to build:

        (1) the statevector can map itself onto a regular grid
        (2) the statevector can map itself (mean+covariance) onto TransCom regions

        An example is given below.
        """
        self.nlag = int(dacycle['time.nlag'])
        self.nmembers = int(dacycle['da.optimizer.nmembers'])
#        self.nparams = int(dacycle.dasystem['nparameters'])
        self.nparams = 181
#        self.nparams = 198
        self.nobs = 0
        self.obs_to_assimilate = ()  # empty container to hold observations to assimilate later on

        # These list objects hold the data for each time step of lag in the system. Note that the ensembles for each time step consist
        # of lists of EnsembleMember objects; we define member 0 as the mean of the distribution and n=1,...,nmembers as the spread.
        self.ensemble_members = list(range(self.nlag))
        for n in range(self.nlag):
            self.ensemble_members[n] = []
        # This specifies the file to read with the gridded mask at 1x1 degrees. Each gridbox holds a number that specifies the parameter
        # member that maps onto it. From this map, a dictionary is created that allows a reverse look-up so that we can map parameters to a grid.
        mapfile = os.path.join(dacycle.dasystem['regionsfile'])
        regfile = Dataset(mapfile, mode='r')
        self.gridmap = np.squeeze(regfile.variables['regions'])
        self.tcmap = np.squeeze(regfile.variables['transcom_regions'])
        self.lat = np.squeeze(regfile.variables['latitude'])
        self.lon = np.squeeze(regfile.variables['longitude'])
        regfile.close()

        logging.debug("A TransCom map on 1x1 degree was read from file %s" % dacycle.dasystem['regionsfile'])
        logging.debug("A parameter map on 1x1 degree was read from file %s" % dacycle.dasystem['regionsfile'])

        # Create a dictionary for state <-> gridded map conversions
#        nparams = 198
        nparams = 181
        self.griddict = {}
        for pft in range(1, 18):
            for r in range(1, 11):
                sel = np.nonzero(self.gridmap[pft, :, :].flat == r)
                if len(sel[0]) > 0:
                    self.griddict[pft, r] = sel
                    self.griddict[pft, r + 11] = sel   # pavle - expand dictionary for nparam values because of RESP
                    # pavle: sel sorts out regions by PFT

        logging.debug("A dictionary to map grids to states and vice versa was created")
        # Create a matrix for state <-> TransCom conversions
        self.tcmatrix = np.zeros((self.nparams, 9), 'float')

        for pft in range(0, 17):
            for r in range(1, self.nparams + 1):
                sel = np.nonzero(self.gridmap[pft, :, :].flat == r)
                if len(sel[0]) < 1:
                    continue
                else:
                    n_tc = set(self.tcmap.flatten().take(sel[0]))
                    if len(n_tc) > 1:
                        logging.error("Parameter %d seems to map to multiple TransCom regions (%s), I do not know how to handle this" % (r, n_tc))
                        raise ValueError
                    self.tcmatrix[r - 1, int(n_tc.pop()) - 1] = 1.0

        logging.debug("A matrix to map states to TransCom regions and vice versa was created")

        # Create a mask for species/unknowns
        self.make_species_mask()
    def make_species_mask(self):
        """