Commits on Source (444)
Showing with 3520 additions and 3 deletions
# Exclude the files created from the templates:
templates/*
# Exclude compiled files
da/**/__pycache__/
da/**/*.pyc
da/**/*.bak
<?xml version="1.0" encoding="UTF-8"?>
<projectDescription>
<name>ctdas_light_refactor_new</name>
<comment></comment>
<projects>
</projects>
<buildSpec>
<buildCommand>
<name>org.python.pydev.PyDevBuilder</name>
<arguments>
</arguments>
</buildCommand>
</buildSpec>
<natures>
<nature>org.python.pydev.pythonNature</nature>
</natures>
</projectDescription>
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?eclipse-pydev version="1.0"?>
<pydev_project>
<pydev_pathproperty name="org.python.pydev.PROJECT_SOURCE_PATH">
<path>/ctdas_light_refactor_new</path>
</pydev_pathproperty>
<pydev_property name="org.python.pydev.PYTHON_PROJECT_VERSION">python 2.7</pydev_property>
<pydev_property name="org.python.pydev.PYTHON_PROJECT_INTERPRETER">Default</pydev_property>
</pydev_project>
@@ -12,12 +12,12 @@ clean:
# -------------------
/bin/rm -f jb.[0-9]*.{jb,rc,log}
/bin/rm -f */*/jb.[0-9]*.{rc,jb,log}
/bin/rm -f *.pyc
/bin/rm -f */*/*.pyc
/bin/rm -f das.[0-9]*.log
/bin/rm -f *.log
/bin/rm -f *.[eo]*
/bin/rm -f *.p[eo]*
/bin/rm -f out.*
/bin/rm -f err.*
Dear CTDAS users,
We hope you're all doing well! We would like to inform you that we have restructured our CTDAS code base on Git.
The new structure is written entirely in Python 3 and follows the setup of the 7 classes defined in the 2017 GMD paper.
So, for example, instead of project-related folders like "TM5" or "STILT", we now have folders for obsoperators and statevectors.
In the current master code on Git, we have included the base classes and the code for the standard CarbonTracker Europe CO2 version (labeled cteco2).
Other previous projects have been moved to a temporary "archive" folder,
from which each of you can take your project-specific code and move it into the new structure.
We would like to encourage all users to make use of the GitLab repository as much as possible,
so that we can all benefit from each other's work there.
Note that moving your code to the new structure is very little work:
it is a matter of moving files to the new directories and updating the import paths of the functions.
We have included further information below on the new structure and how to work with it on Git.
Please take a look, and let us know in case you have further questions.
We hope that with this step we can enhance the collaboration between all of us on this code base,
and we will therefore also send you an invitation to our Slack channel code-ctdas,
so that we can keep in touch there about CTDAS code-related questions.
Kind regards,
Ingrid, Wouter and Auke
===================================================
#### The new structure
The code has been updated to python3, and the structure is as follows:
The 7 components of the CTDAS system each have their own folder within the da folder.
- cyclecontrol
- dasystems
- platform
- statevectors
- observations
- obsoperators
- optimizers
Each of the 7 main folders contains the cteco2-specific code and, in some cases, a baseclass module from which project-specific code can inherit properties (e.g. in optimizers). Further folders are:
- pipelines (this contains the pipelines that are called in the main ctdas.py script, currently implemented for the cteco2 setup)
- analysis (here we have standard analysis tools, plus a subfolder for cteco2)
- tools
- preprocessing
- rc
- doc
- archive (this contains all code that was previously in project-folders, this can be
moved into the main new code, further details below).
Furthermore, in the main CTDAS folder, there are the start_ctdas and clone_ctdas shell scripts, and the main ctdas.py, ctdas.jb and ctdas.rc files in the templates folder.
#### Your project
In case you already had your project-specific CTDAS code on Git, please read this section on how to update to the new structure. New (CTDAS and/or Git) users can read the sections below. Note that we prefer users to work on branches in the main code, rather than on independent forks, so that we can all profit from each other's developments.
Please follow these steps (two examples below for fork or branch):
1. Create a new temporary branch to make your project-specific code in the new structure
2. Get your changes from your original project-specific branch/fork into the new temporary structure branch
3. Merge request from temporary branch to the master
You can check if you are on the main code or a fork using:
`git remote -v`
In case you are on the main code, you will get something like:
`origin https://git.wur.nl/ctdas/CTDAS.git`
In case you are on a fork, you will get something like:
`origin https://git.wur.nl/woude033/CTDAS.git (fetch)`
`origin https://git.wur.nl/woude033/CTDAS.git (push)`
`upstream https://git.wur.nl/ctdas/CTDAS.git (fetch)`
`upstream https://git.wur.nl/ctdas/CTDAS.git (push)`
You can check on which branch you are as follows:
`git branch`
The output will look like:
` master`
` * feature-STILT`
Example: if you are working on a fork, start at step 0; otherwise, if you are working in a branch of the main code (https://git.wur.nl/ctdas/CTDAS.git), e.g. called feature-STILT, start at step 1.
0. In case you are working on a fork:
Make a new folder locally and get a new clone from the master of the main code:
`git clone https://git.wur.nl/ctdas/CTDAS`
1. Make a new temporary branch on the main code:
`git checkout -b temp_stilt_new_structure remotes/origin/master`
2. If you were working in a fork (otherwise skip to step 3): merge your fork as a new temporary branch into the main code.
Go to https://git.wur.nl/ctdas/CTDAS, go to 'Merge requests', and click 'New merge request'. On the left, fill in your fork, and on the right the main CTDAS code and your new temporary branch temp_stilt_new_structure.
3. Copy a file from your previous branch to the new temporary branch:
`git checkout feature-STILT stilt/optimizer.py`
4. Move this file to the correct place in the new structure:
`git mv stilt/optimizer.py optimizers/optimizer_stilt.py`
5. Make sure everything is working with this new code, and then:
`git add .`
`git commit -m "Added STILT optimizer"`
`git push`
6. After you have finished implementing and testing all your changes: go to https://git.wur.nl/ctdas/CTDAS, go to 'Merge requests', and click 'New merge request'.
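Steps 3-5 above can be tried out safely in a throwaway repository. The sketch below mirrors the STILT example (file contents and commit messages are stand-ins; it does not touch the real CTDAS repository) and shows that `git mv` keeps the file's history visible via `git log --follow`:

```shell
# Throwaway-repo demonstration of the move-and-commit steps (illustrative only):
demo=$(mktemp -d)
cd "$demo"
git init -q
git config user.email "demo@example.com"   # local config for this demo repo only
git config user.name  "demo"
mkdir stilt
echo "# optimizer" > stilt/optimizer.py    # stand-in for your project file
git add stilt/optimizer.py
git commit -q -m "Add STILT optimizer (old structure)"
mkdir optimizers
git mv stilt/optimizer.py optimizers/optimizer_stilt.py
git commit -q -m "Added STILT optimizer"
git log --oneline --follow -- optimizers/optimizer_stilt.py   # history follows the rename
```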
====================================================
#### Some useful GIT commands:
1. Check if you’re on the master branch:
`git branch`
2. If you’re not on the master branch, switch to it:
`git checkout master`
3. Switching to another existing branch:
`git checkout feature-STILT`
4. If you want to compare your code to the master:
`git fetch`
`git status`
5. To get the changes from the remote master:
`git pull`
6. Moving a file:
`git mv stilt/optimizer.py optimizers/optimizer_stilt.py`
7. Committing changes:
`git add .`
`git commit -m "A message"`
`git push`
8. If you want to replace your local master code with the remote master code, follow these steps (warning: your previous version will be lost!):
`git fetch origin`
`git reset --hard origin/master`
9. Get changes to the master code in your branch:
`git merge origin/master`
#### Experienced CTDAS users, but new to GIT:
Follow the first steps for new users (see below) to get your copy of the current CTDAS code from Git in a new folder. Then compare the changes in your local code and merge your changes into the new structure (see description above) in your temporary branch. It may happen that you want to create a copy of an existing file on Git and adapt it to your project. There is no `git cp` command, so to create a copy, follow these steps:
`cp optimizers/optimizer_cteco2.py optimizers/optimizer_ctesamco2.py`
`git add optimizers/optimizer_ctesamco2.py`
`git commit -m "Added CTE South America CO2 optimizer"`
`git push`
##### Python3
To update your code to python3, the tool 2to3 is very convenient. See for more details:
https://docs.python.org/3/library/2to3.html
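To give an idea of what 2to3 does: it rewrites Python 2 idioms such as the print statement and `dict.has_key` into their Python 3 equivalents. A minimal before/after (the variable and data here are illustrative, not from CTDAS):

```python
# Python 2 code that 2to3 (run as `2to3 -w yourfile.py`) would rewrite:
#   print "co2 present:", fluxes.has_key('co2')
# Python 3 result:
fluxes = {'co2': 400.0}  # illustrative data
print("co2 present:", 'co2' in fluxes)
```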
#### New CTDAS users
Make a folder, then:
`git clone https://git.wur.nl/ctdas/CTDAS`
Make your temporary branch to develop your code:
`git checkout -b temp_stilt_new_structure remotes/origin/master`
(Later on, you can do a merge request to merge your changes into the master branch).
The main code is found in the da folder. You will find subfolders for each of the components
of the CTDAS system (see information above about the structure). Furthermore, we have a
script to start a new run, as follows:
`./start_ctdas.sh /somedirectory/ name_of_your_run`
This will create a new folder for your run and make a copy of the da folder. Furthermore, the main ctdas.py will be created, named name_of_your_run.py, along with the main jb and rc files. These are based on the templates in the templates folder. In this way, you will always have the current version of the code with your run. For further information, please refer to: https://www.carbontracker.eu/ctdas/
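Roughly, the setup that start_ctdas.sh performs can be sketched as below. This is an illustrative stand-in, not the actual script: the directory names, the RUNNAME placeholder, and the template contents are invented for the example; the real templates live in the templates folder.

```shell
# Hedged sketch of the run-folder setup (illustrative only):
base=$(mktemp -d)                        # stands in for /somedirectory/
run=my_run                               # stands in for name_of_your_run
mkdir -p "$base/$run"
# a stand-in template with a placeholder to be filled per run:
echo "run.name : RUNNAME" > "$base/ctdas.rc.template"
# render the template for this run, as the script does for the py/jb/rc files:
sed "s/RUNNAME/$run/" "$base/ctdas.rc.template" > "$base/$run/$run.rc"
cat "$base/$run/$run.rc"                 # -> run.name : my_run
```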
# CarbonTracker Data Assimilation Shell (CTDAS) Copyright (C) 2017 Wouter Peters.
# Users are recommended to contact the developers (wouter.peters@wur.nl) to receive
# updates of the code. See also: http://www.carbontracker.eu.
#
# This program is free software: you can redistribute it and/or modify it under the
# terms of the GNU General Public License as published by the Free Software Foundation,
# version 3. This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along with this
# program. If not, see <http://www.gnu.org/licenses/>.
#!/bin/bash
set -e
usage="$(basename "$0") [arg1] [arg2] [arg3] [-h]
-- script to clone an existing CTDAS run and take over all its settings. This allows a run to be forked, or continued separately from its origin
where:
arg1: base directory with original project (i.e., /scratch/"$USER"/)
arg2: original project name (i.e, test_ctdas)
arg3: cloned project name (i.e, real_ctdas_run)
-h shows this help text
! A new folder will then be created and populated:
/scratch/"$USER"/real_ctdas_run/
"
while getopts ':hs:' option; do
case "$option" in
h) echo "$usage"
exit
;;
:) printf "missing argument for -%s\n" "$OPTARG" >&2
echo "$usage" >&2
exit 1
;;
\?) printf "illegal option: -%s\n" "$OPTARG" >&2
echo "$usage" >&2
exit 1
;;
esac
done
EXPECTED_ARGS=3
if [[ $# -ne $EXPECTED_ARGS ]]; then
printf "Missing arguments to function, need $EXPECTED_ARGS \n\n"
echo "$usage"
exit 2
fi
echo "New project to be started in folder $1"
echo " ...........with name $3"
echo " ...........cloned from $1/$2"
sourcedir=$1/$2/exec
rundir=$1/$3/exec
sedrundir=$1/$3/exec
if [ -d "$rundir" ]; then
echo "Directory already exists, please remove before running $0"
exit 1
fi
mkdir -p ${rundir}
rsync -ru --cvs-exclude --exclude="*nc" ${sourcedir}/* ${rundir}/
rsync -ru --cvs-exclude ${sourcedir}/da/analysis/cteco2/*nc ${rundir}/da/analysis/cteco2
cd ${rundir}
echo "Modifying jb file, py file, and rc-file"
sed -e "s/$2/$3/g" ${sourcedir}/$2.jb > $3.jb
sed -e "s/$2/$3/g" ${sourcedir}/$2.py > $3.py
sed -e "s/$2/$3/g" ${sourcedir}/$2.rc > $3.rc
rm -f clone_ctdas.sh
rm -f $2.jb $2.rc $2.py
make clean
chmod u+x $3.jb
echo ""
echo "************* NOW USE ****************"
ls -lrta $3.*
echo "**************************************"
echo ""
cd ${rundir}
pwd
"""CarbonTracker Data Assimilation Shell (CTDAS) Copyright (C) 2017 Wouter Peters.
Users are recommended to contact the developers (wouter.peters@wur.nl) to receive
updates of the code. See also: http://www.carbontracker.eu.
This program is free software: you can redistribute it and/or modify it under the
terms of the GNU General Public License as published by the Free Software Foundation,
version 3. This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this
program. If not, see <http://www.gnu.org/licenses/>."""
#!/usr/bin/env python
# expand_fluxes.py
import sys
import os
import getopt
import shutil
import logging
import netCDF4
import numpy as np
from datetime import datetime, timedelta
sys.path.append('../../')
from da.tools.general import date2num, num2date
from da.tools.general import create_dirs
import da.tools.io4 as io
"""
Author: Wouter Peters (Wouter.Peters@wur.nl)
Revision History:
File created on 11 May 2012.
"""
def write_mole_fractions(dacycle):
"""
Write Sample information to NetCDF files. These files are organized by site and
have an unlimited time axis to which data is appended each cycle.
The needed information is obtained from the sample_auxiliary.nc files and the original input data files from ObsPack.
The steps are:
(1) Create a directory to hold timeseries output files
(2) Read the sample_auxiliary.nc file for this cycle and get a list of original files they were obtained from
(3) For each file, copy the original data file from ObsPack (if not yet present)
(4) Open the copied file, find the index of each observation, fill in the simulated data
"""
dirname = create_dirs(os.path.join(dacycle['dir.analysis'], 'data_molefractions'))
#
# Some help variables
#
dt = dacycle['cyclelength']
startdate = dacycle['time.start']
enddate = dacycle['time.end']
logging.debug("DA Cycle start date is %s" % startdate.strftime('%Y-%m-%d %H:%M'))
logging.debug("DA Cycle end date is %s" % enddate.strftime('%Y-%m-%d %H:%M'))
dacycle['time.sample.stamp'] = "%s_%s" % (startdate.strftime("%Y%m%d%H"), enddate.strftime("%Y%m%d%H"),)
# Step (1): Get the posterior sample output data file for this cycle
infile = os.path.join(dacycle['dir.output'], 'sample_auxiliary_%s.nc' % dacycle['time.sample.stamp'])
ncf_in = io.ct_read(infile, 'read')
if len(ncf_in.dimensions['obs']) == 0:
ncf_in.close()
return None
obs_num = ncf_in.get_variable('obs_num')
obs_val = ncf_in.get_variable('observed')
simulated = ncf_in.get_variable('modelsamples')
infilename = ncf_in.get_variable('inputfilename')
infiles1 = netCDF4.chartostring(infilename).tolist()
# In case of reanalysis on different platform, obspack-input-directory might have a different name.
# This is checked here, and the filenames are corrected
dir_from_rc = dacycle.dasystem['obspack.input.dir']
dir_from_output = infiles1[0]
d1 = dir_from_rc[:dir_from_rc.find('obspacks')]
d2 = dir_from_output[:dir_from_output.find('obspacks')]
if d1 == d2:
infiles = infiles1
else:
infiles = []
for ff in infiles1:
infiles.append(ff.replace(d2,d1))
ncf_in.close()
# Step (2): Get the prior sample output data file for this cycle
infile = os.path.join(dacycle['dir.output'], 'optimizer.%s.nc' % startdate.strftime('%Y%m%d'))
if os.path.exists(infile):
optimized_present = True
else:
optimized_present = False
if optimized_present:
ncf_fc_in = io.ct_read(infile, 'read')
fc_obs_num = ncf_fc_in.get_variable('obspack_num')
fc_obs_val = ncf_fc_in.get_variable('observed')
fc_simulated = ncf_fc_in.get_variable('modelsamplesmean_prior')
fc_simulated_ens = ncf_fc_in.get_variable('modelsamplesdeviations_prior')
fc_flag = ncf_fc_in.get_variable('flag')
if 'modeldatamismatchvariance' not in dacycle.dasystem:
fc_r = ncf_fc_in.get_variable('modeldatamismatchvariance')
fc_hphtr = ncf_fc_in.get_variable('totalmolefractionvariance')
elif dacycle.dasystem['opt.algorithm'] == 'serial':
fc_r = ncf_fc_in.get_variable('modeldatamismatchvariance')
fc_hphtr = ncf_fc_in.get_variable('totalmolefractionvariance')
elif dacycle.dasystem['opt.algorithm'] == 'bulk':
fc_r = ncf_fc_in.get_variable('modeldatamismatchvariance').diagonal()
fc_hphtr = ncf_fc_in.get_variable('totalmolefractionvariance').diagonal()
filesitecode = ncf_fc_in.get_variable('sitecode')
fc_sitecodes = netCDF4.chartostring(filesitecode).tolist()
ncf_fc_in.close()
# Expand the list of input files with those available from the forecast list
infiles_rootdir = os.path.split(infiles[0])[0]
infiles.extend(os.path.join(infiles_rootdir, f + '.nc') for f in fc_sitecodes)
#Step (2): For each observation timeseries we now have data for, open it and fill with data
for orig_file in set(infiles):
if not os.path.exists(orig_file):
logging.error("The original input file (%s) could not be found, continuing to next file..." % orig_file)
continue
copy_file = os.path.join(dirname, os.path.split(orig_file)[-1])
if not os.path.exists(copy_file):
shutil.copy(orig_file, copy_file)
logging.debug("Copied a new original file (%s) to the analysis directory" % orig_file)
ncf_out = io.CT_CDF(copy_file, 'write')
# Modify the attributes of the file to reflect added data from CTDAS properly
try:
host = os.environ['HOSTNAME']
except KeyError:
host = 'unknown'
ncf_out.Caution = '==================================================================================='
try:
ncf_out.History += '\nOriginal observation file modified by user %s on %s\n' % (os.environ['USER'], datetime.today().strftime('%F'),)
except AttributeError:  # History attribute does not exist yet
ncf_out.History = '\nOriginal observation file modified by user %s on %s\n' % (os.environ['USER'], datetime.today().strftime('%F'),)
ncf_out.CTDAS_info = 'Simulated values added from a CTDAS run by %s on %s\n' % (os.environ['USER'], datetime.today().strftime('%F'),)\
+ '\nCTDAS was run on platform %s' % (host,)\
+ '\nCTDAS job directory was %s' % (dacycle['dir.da_run'],)\
+ '\nCTDAS Da System was %s' % (dacycle['da.system'],)\
+ '\nCTDAS Da ObsOperator was %s' % (dacycle['da.obsoperator'],)
ncf_out.CTDAS_startdate = dacycle['time.start'].strftime('%F')
ncf_out.CTDAS_enddate = dacycle['time.finish'].strftime("%F")
ncf_out.original_file = orig_file
# get nobs dimension
if 'id' in ncf_out.dimensions:
dimidob = ncf_out.dimensions['id']
dimid = ('id',)
elif 'obs' in ncf_out.dimensions:
dimidob = ncf_out.dimensions['obs']
dimid = ('obs',)
if dimidob.isunlimited():
nobs = ncf_out.inq_unlimlen()
else:
nobs = len(dimidob)
# add nmembers dimension
dimmembersob = ncf_out.createDimension('nmembers', size=simulated.shape[1])
dimmembers = ('nmembers',)
nmembers = simulated.shape[1]
# Create empty arrays for posterior samples, as well as for forecast sample statistics
savedict = io.std_savedict.copy()
savedict['name'] = "flag_forecast"
savedict['long_name'] = "flag_for_obs_model in forecast"
savedict['units'] = "None"
savedict['dims'] = dimid
savedict['comment'] = 'Flag (0/1/2/99) for observation value, 0 means okay, 1 means QC error, 2 means rejected, 99 means not sampled'
ncf_out.add_variable(savedict)
savedict = io.std_savedict.copy()
savedict['name'] = "modeldatamismatch"
savedict['long_name'] = "modeldatamismatch"
savedict['units'] = "[mol mol-1]^2"
savedict['dims'] = dimid
savedict['comment'] = 'Variance of mole fractions resulting from model-data mismatch'
ncf_out.add_variable(savedict)
savedict = io.std_savedict.copy()
savedict['name'] = "totalmolefractionvariance_forecast"
savedict['long_name'] = "totalmolefractionvariance of forecast"
savedict['units'] = "[mol mol-1]^2"
savedict['dims'] = dimid
savedict['comment'] = 'Variance of mole fractions resulting from prior state and model-data mismatch'
ncf_out.add_variable(savedict)
savedict = io.std_savedict.copy()
savedict['name'] = "modelsamplesmean"
savedict['long_name'] = "mean modelsamples"
savedict['units'] = "mol mol-1"
savedict['dims'] = dimid
savedict['comment'] = 'simulated mole fractions based on optimized state vector'
ncf_out.add_variable(savedict)
savedict = io.std_savedict.copy()
savedict['name'] = "modelsamplesmean_forecast"
savedict['long_name'] = "mean modelsamples from forecast"
savedict['units'] = "mol mol-1"
savedict['dims'] = dimid
savedict['comment'] = 'simulated mole fractions based on prior state vector'
ncf_out.add_variable(savedict)
savedict = io.std_savedict.copy()
savedict['name'] = "modelsamplesstandarddeviation"
savedict['long_name'] = "standard deviation of modelsamples over all ensemble members"
savedict['units'] = "mol mol-1"
savedict['dims'] = dimid
savedict['comment'] = 'std dev of simulated mole fractions based on optimized state vector'
ncf_out.add_variable(savedict)
savedict = io.std_savedict.copy()
savedict['name'] = "modelsamplesstandarddeviation_forecast"
savedict['long_name'] = "standard deviation of modelsamples from forecast over all ensemble members"
savedict['units'] = "mol mol-1"
savedict['dims'] = dimid
savedict['comment'] = 'std dev of simulated mole fractions based on prior state vector'
ncf_out.add_variable(savedict)
savedict = io.std_savedict.copy()
savedict['name'] = "modelsamplesensemble"
savedict['long_name'] = "modelsamples over all ensemble members"
savedict['units'] = "mol mol-1"
savedict['dims'] = dimid + dimmembers
savedict['comment'] = 'ensemble of simulated mole fractions based on optimized state vector'
ncf_out.add_variable(savedict)
savedict = io.std_savedict.copy()
savedict['name'] = "modelsamplesensemble_forecast"
savedict['long_name'] = "modelsamples from forecast over all ensemble members"
savedict['units'] = "mol mol-1"
savedict['dims'] = dimid + dimmembers
savedict['comment'] = 'ensemble of simulated mole fractions based on prior state vector'
ncf_out.add_variable(savedict)
else:
logging.debug("Modifying existing file (%s) in the analysis directory" % copy_file)
ncf_out = io.CT_CDF(copy_file, 'write')
# Get existing file obs_nums to determine match to local obs_nums
if 'merge_num' in ncf_out.variables:
file_obs_nums = ncf_out.get_variable('merge_num')
elif 'obspack_num' in ncf_out.variables:
file_obs_nums = ncf_out.get_variable('obspack_num')
elif 'id' in ncf_out.variables:
file_obs_nums = ncf_out.get_variable('id')
# Get all obs_nums related to this file, determine their indices in the local arrays
selected_obs_nums = [num for infile, num in zip(infiles, obs_num) if infile == orig_file]
# Optimized data 1st: For each index, get the data and add to the file in the proper file index location
for num in selected_obs_nums:
model_index = np.where(obs_num == num)[0][0]
file_index = np.where(file_obs_nums == num)[0][0]
var = ncf_out.variables['modelsamplesmean']
var[file_index] = simulated[model_index, 0]
var = ncf_out.variables['modelsamplesstandarddeviation']
var[file_index] = simulated[model_index, 1:].std()
var = ncf_out.variables['modelsamplesensemble']
var[file_index] = simulated[model_index, :]
# Now forecast data too: For each index, get the data and add to the file in the proper file index location
if optimized_present:
selected_fc_obs_nums = [num for sitecode, num in zip(fc_sitecodes, fc_obs_num) if sitecode in orig_file]
for num in selected_fc_obs_nums:
model_index = np.where(fc_obs_num == num)[0][0]
file_index = np.where(file_obs_nums == num)[0][0]
var = ncf_out.variables['modeldatamismatch']
var[file_index] = np.sqrt(fc_r[model_index])
var = ncf_out.variables['modelsamplesmean_forecast']
var[file_index] = fc_simulated[model_index]
var = ncf_out.variables['modelsamplesstandarddeviation_forecast']
var[file_index] = fc_simulated_ens[model_index, 1:].std()
var = ncf_out.variables['modelsamplesensemble_forecast']
var[file_index] = fc_simulated_ens[model_index, :]
var = ncf_out.variables['totalmolefractionvariance_forecast']
var[file_index] = fc_hphtr[model_index]
var = ncf_out.variables['flag_forecast']
var[file_index] = fc_flag[model_index]
# close the file
status = ncf_out.close()
return None
if __name__ == "__main__":
import logging
from da.tools.initexit import CycleControl
from da.carbondioxide.dasystem import CO2DaSystem
sys.path.append('../../')
logging.root.setLevel(logging.DEBUG)
dacycle = CycleControl(args={'rc':'../../ctdas-od-gfed2-glb6x4-obspack-full.rc'})
dasystem = CO2DaSystem('../rc/carbontracker_ct09_opfnew.rc')
dacycle.dasystem = dasystem
dacycle.setup()
dacycle.parse_times()
while dacycle['time.start'] < dacycle['time.finish']:
outfile = write_mole_fractions(dacycle)
dacycle.advance_cycle_times()
sys.exit(0)
"""CarbonTracker Data Assimilation Shell (CTDAS) Copyright (C) 2017 Wouter Peters.
Users are recommended to contact the developers (wouter.peters@wur.nl) to receive
updates of the code. See also: http://www.carbontracker.eu.
This program is free software: you can redistribute it and/or modify it under the
terms of the GNU General Public License as published by the Free Software Foundation,
version 3. This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this
program. If not, see <http://www.gnu.org/licenses/>."""
#!/usr/bin/env python
# merge_ctdas_runs.py
"""
Author : peters
Revision History:
File created on 14 Jul 2014.
This script merges the analysis directories from multiple projects into one new folder.
It steps over existing analysis output files from weekly means, and then averages these to daily/monthly/yearly values.
"""
import datetime as dt
import os
import sys
import shutil
from . import time_avg_fluxes as tma
basedir = '/Storage/CO2/ingrid/'
basedir2 = '/Storage/CO2/peters/'
targetproject = 'geocarbon-ei-sibcasa-gfed4-zoom-gridded-combined-convec-20011230-20130101'
targetdir = os.path.join(basedir2,targetproject)
sources = {
'2000-01-01 through 2011-12-31': os.path.join(basedir,'carbontracker','geocarbon-ei-sibcasa-gfed4-zoom-gridded-convec-combined'),
'2012-01-01 through 2012-12-31': os.path.join(basedir2,'geocarbon-ei-sibcasa-gfed4-zoom-gridded-convec-20111231-20140101'),
}
dirs = ['flux1x1','transcom','country','olson']
dacycle = {}
dacycle['time.start'] = dt.datetime(2000,12,30)
dacycle['time.end'] = dt.datetime(2013,1,1)
dacycle['cyclelength'] = dt.timedelta(days=7)
dacycle['dir.analysis'] = os.path.join(targetdir,'analysis')
if __name__ == "__main__":
if not os.path.exists(targetdir):
os.makedirs(targetdir)
if not os.path.exists(os.path.join(targetdir,'analysis')):
os.makedirs(os.path.join(targetdir,'analysis') )
for nam in dirs:
if not os.path.exists(os.path.join(targetdir,'analysis','data_%s_weekly'%nam)):
os.makedirs(os.path.join(targetdir,'analysis','data_%s_weekly'%nam) )
timedirs=[]
for ss,vv in sources.items():
sds,eds = ss.split(' through ')
sd = dt.datetime.strptime(sds,'%Y-%m-%d')
ed = dt.datetime.strptime(eds,'%Y-%m-%d')
timedirs.append([sd,ed,vv])
print(sd,ed, vv)
while dacycle['time.start'] < dacycle['time.end']:
# copy the weekly flux1x1 file from the original dir to the new project dir
for td in timedirs:
if dacycle['time.start'] >= td[0] and dacycle['time.start'] <= td[1]:
indir=td[2]
# Now time avg new fluxes
infile = os.path.join(indir,'analysis','data_flux1x1_weekly','flux_1x1.%s.nc'%(dacycle['time.start'].strftime('%Y-%m-%d') ) )
#print os.path.exists(infile),infile
shutil.copy(infile,infile.replace(indir,targetdir) )
tma.time_avg(dacycle,avg='flux1x1')
infile = os.path.join(indir,'analysis','data_transcom_weekly','transcom_fluxes.%s.nc'%(dacycle['time.start'].strftime('%Y-%m-%d') ) )
#print os.path.exists(infile),infile
shutil.copy(infile,infile.replace(indir,targetdir) )
tma.time_avg(dacycle,avg='transcom')
infile = os.path.join(indir,'analysis','data_olson_weekly','olson_fluxes.%s.nc'%(dacycle['time.start'].strftime('%Y-%m-%d') ) )
#print os.path.exists(infile),infile
shutil.copy(infile,infile.replace(indir,targetdir) )
tma.time_avg(dacycle,avg='olson')
infile = os.path.join(indir,'analysis','data_country_weekly','country_fluxes.%s.nc'%(dacycle['time.start'].strftime('%Y-%m-%d') ) )
#print os.path.exists(infile),infile
shutil.copy(infile,infile.replace(indir,targetdir) )
tma.time_avg(dacycle,avg='country')
dacycle['time.start'] += dacycle['cyclelength']