Compare revisions

Changes are shown as if the source revision was being merged into the target revision.

Commits on Source (91)
Showing 197 additions and 57 deletions
# Exclude the files created from the templates:
templates/*
# Exclude compiled files
da/**/__pycache__/
da/**/*.pyc
da/**/*.bak
Dear CTDAS users,
We hope you’re all doing well! We would like to inform you that we have restructured our CTDAS code base on GIT.
This new structure is completely in python3 and follows the setup of the 7 classes defined in the 2017 GMD paper.
So e.g. instead of having project-related folders like “TM5” or “STILT”, we now have folders for obsoperators and statevectors.
In the current master code on GIT, we have included baseclasses and the code for the standard CarbonTracker Europe CO2 version (labeled cteco2).
Other previous projects have been moved to a temporary “archive” folder,
from which each of you can take your project specific code and move it into the new structure.
We would like to encourage all users to make use of the GITlab repository as much as possible,
so that we can all benefit from each other’s work there.
Note that moving your code to the new structure is very little work:
it is a matter of moving files to the new directories and changing the import paths of the functions.
We have included further information below on the new structure and how to work with it on GIT.
Please take a look, and let us know in case you have further questions.
We hope that with this step, we can enhance the collaboration between all of us on this code base,
and we will therefore also send you an invitation to our slack channel code-ctdas,
so that we can keep in touch there with CTDAS code related questions.
Kind regards,
Ingrid, Wouter and Auke
===================================================
The new structure
The code has been updated to python3, and the structure is as follows:
The 7 components of the CTDAS system each have their own folder within the da folder.
- cyclecontrol
- dasystems
- platform
- statevectors
- observations
- obsoperators
- optimizers
Each of the 7 main folders contains the cteco2 specific code, and in some cases, there is a baseclass module from which project specific code can inherit the properties (e.g. in optimizers). Further folders are:
- pipelines (this contains the pipelines that are called in the main ctdas.py script, currently implemented for the cteco2 setup)
- analysis (here we have standard analysis tools, plus a subfolder for cteco2)
- tools
- preprocessing
- rc
- doc
- archive (this contains all code that was previously in project folders; it can be moved into the new structure, further details below).
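The baseclass pattern mentioned above (e.g. in optimizers) can be sketched as follows. This is a minimal illustration only: the class and method names below are assumptions for the example, not the actual CTDAS API.

```python
# Minimal sketch of the baseclass idea used in e.g. the optimizers folder.
# All names below are illustrative, not the actual CTDAS classes.

class BaseOptimizer:
    """Shared properties and logic that project-specific code inherits."""

    def __init__(self, nmembers=150):
        self.nmembers = nmembers  # ensemble size, shared by all projects

    def minimize(self):
        return "base implementation shared by all projects"


class OptimizerCTECO2(BaseOptimizer):
    """Project-specific optimizer: inherits the base, overrides where needed."""

    def minimize(self):
        # Reuse the inherited behaviour, then add project-specific settings
        result = super().minimize()
        return result + " (with cteco2 settings)"


opt = OptimizerCTECO2(nmembers=150)
print(opt.minimize())
```

A project port then mostly consists of placing such a module in the matching component folder (e.g. optimizers/) and inheriting from the baseclass instead of duplicating its code.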
Furthermore, in the main CTDAS folder, there are the start_ctdas and clone_ctdas shell scripts, and the main ctdas.py, ctdas.jb and ctdas.rc files in the templates folder.
Your project
In case you already had your project-specific CTDAS code on GIT, please read this section on how to update to the new structure. New (CTDAS and/or GIT) users can read the sections below. Note that we prefer users to work on branches in the main code, rather than on independent forks, so that we can all profit from each other’s developments.
Please follow these steps (two examples below for fork or branch):
1. Create a new temporary branch to make your project-specific code in the new structure
2. Get your changes from your original project-specific branch/fork into the new temporary structure branch
3. Merge request from temporary branch to the master
You can check if you are on the main code or a fork using:
`git remote -v`
In case you are on the main code, you will get something like:
`origin https://git.wur.nl/ctdas/CTDAS.git`
In case you are on a fork, you will get something like:
`origin https://git.wur.nl/woude033/CTDAS.git (fetch)`
`origin https://git.wur.nl/woude033/CTDAS.git (push)`
`upstream https://git.wur.nl/ctdas/CTDAS.git (fetch)`
`upstream https://git.wur.nl/ctdas/CTDAS.git (push)`
You can check on which branch you are as follows:
`git branch`
The output will look like:
` master`
` * feature-STILT`
Example: if you are working on a fork, start at step 0; if you are working on a branch of the main code (https://git.wur.nl/ctdas/CTDAS.git), e.g. feature-STILT, start at step 1.
0. In case you are working on a fork:
Make a new folder locally and get a new clone from the master of the main code:
`git clone https://git.wur.nl/ctdas/CTDAS`
1. Make a new temporary branch on the main code:
`git checkout -b temp_stilt_new_structure remotes/origin/master`
2. If you were working in a fork (else skip to step 3): merge your fork as a new temporary branch in the main code.
Go to https://git.wur.nl/ctdas/CTDAS, open ‘Merge requests’ and click ‘New merge request’. On the left, select your fork; on the right, select the main CTDAS code and your new temporary branch temp_stilt_new_structure.
3. Copy a file from your previous branch to the new temporary branch:
`git checkout feature-STILT stilt/optimizer.py`
4. Move this file to the correct place in the new structure:
`git mv stilt/optimizer.py optimizers/optimizer_stilt.py`
5. Make sure everything is working with this new code, and then:
`git add .`
`git commit -m "Added STILT optimizer"`
`git push`
6. After you have implemented and tested all your changes: go to https://git.wur.nl/ctdas/CTDAS, open ‘Merge requests’ and click ‘New merge request’.
====================================================
#### Some useful GIT commands:
1. Check if you’re on the master branch:
`git branch`
2. If you’re not on the master branch, switch to it:
`git checkout master`
3. Switching to another existing branch:
`git checkout feature-STILT`
4. If you want to compare your code to the master:
`git fetch`
`git status`
5. To get the changes from the remote master:
`git pull`
6. Moving a file:
`git mv stilt/optimizer.py optimizers/optimizer_stilt.py`
7. Committing changes:
`git add .`
`git commit -m "A message"`
`git push`
8. If you want to replace your local master code with the remote master code, follow these steps (warning: your previous version will be lost!):
`git fetch origin`
`git reset --hard origin/master`
9. Get changes to the master code in your branch:
`git merge origin/master`
#### Experienced CTDAS users, but new to GIT:
Follow the first steps for new users (see below) to get a copy of the current CTDAS code from GIT in a new folder. Then compare it with your local code and merge your changes into the new structure (see the description above) in your temporary branch. You may also want to create a copy of an existing file on GIT and adapt it to your project. There is no git cp command, so to create a copy with GIT history, follow these steps:
`cp optimizers/optimizer_cteco2.py optimizers/optimizer_ctesamco2.py`
`git add optimizers/optimizer_ctesamco2.py`
`git commit -m "Added CTE South America CO2 optimizer"`
`git push`
##### Python3
To update your code to python3, the 2to3 tool is very convenient. For more details, see:
https://docs.python.org/3/library/2to3.html
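For reference, the changes 2to3 applies are the same kinds visible throughout this update. A minimal before/after sketch:

```python
# Python 2 idioms and their Python 3 replacements, as seen in this update.
vardict = {'bio_flux': 1.0, 'ocean_flux': 2.0}

# Python 2: for vname, vprop in vardict.iteritems():
for vname, vprop in vardict.items():   # iteritems() -> items()
    print(vname, vprop)                # print statement -> print() function

# Python 2: raise ValueError, 'Averaging (%s) does not exist' % avg
# Python 3: raise ValueError('Averaging (%s) does not exist' % avg)

# Python 2: response = raw_input(txt)
# Python 3: response = input(txt)

# Python 2: datasets = file.variables.keys()   (returns a list)
# Python 3: keys() returns a view, so wrap it when a list is needed:
# datasets = list(file.variables.keys())
```

Running `2to3 -w da/` rewrites the files in place; always review the diff afterwards, since a few cases (e.g. StringIO vs. BytesIO for binary data) still need manual attention.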
New CTDAS users
Make a folder, then:
`git clone https://git.wur.nl/ctdas/CTDAS`
Make your temporary branch to develop your code:
`git checkout -b temp_stilt_new_structure remotes/origin/master`
(Later on, you can do a merge request to merge your changes into the master branch).
The main code is found in the da folder. You will find subfolders for each of the components
of the CTDAS system (see information above about the structure). Furthermore, we have a
script to start a new run, as follows:
`./start_ctdas.sh /somedirectory/ name_of_your_run`
This will create a new folder for your run and copy the da folder into it. Furthermore, the main ctdas.py will be created, named name_of_your_run.py, along with the main jb and rc files. These are based on the templates in the templates folder. In this way, you will always have the current version of the code with your run. For further information, please refer to: https://www.carbontracker.eu/ctdas/
@@ -70,7 +70,7 @@ fi
mkdir -p ${rundir}
rsync -ru --cvs-exclude --exclude=*nc ${sourcedir}/* ${rundir}/
rsync -ru --cvs-exclude ${sourcedir}/da/analysis/*nc ${rundir}/da/analysis/
rsync -ru --cvs-exclude ${sourcedir}/da/analysis/cteco2/*nc ${rundir}/da/analysis/cteco2
cd ${rundir}
echo "Modifying jb file, py file, and rc-file"
......
File deleted
@@ -16,22 +16,22 @@ import sys
import os
sys.path.append('../../')
rootdir = os.getcwd().split('da/')[0]
analysisdir = os.path.join(rootdir, 'da/analysis')
analysisdir = os.path.join(rootdir, 'da/analysis/cteco2/')
from datetime import datetime, timedelta
import logging
import numpy as np
from da.tools.general import date2num, num2date
import da.tools.io4 as io
from da.analysis.tools_regions import globarea, state_to_grid
from da.analysis.cteco2.tools_regions import globarea, state_to_grid
from da.tools.general import create_dirs
from da.analysis.tools_country import countryinfo # needed here
from da.analysis.tools_transcom import transcommask, ExtendedTCRegions
from da.analysis.cteco2.tools_country import countryinfo # needed here
from da.analysis.cteco2.tools_transcom import transcommask, ExtendedTCRegions
import netCDF4 as cdf
import da.analysis.tools_transcom as tc
import da.analysis.tools_country as ct
import da.analysis.tools_time as timetools
import da.analysis.cteco2.tools_transcom as tc
import da.analysis.cteco2.tools_country as ct
import da.analysis.cteco2.tools_time as timetools
@@ -45,7 +45,7 @@ File created on 21 October 2008.
def proceed_dialog(txt, yes=['y', 'yes'], all=['a', 'all', 'yes-to-all']):
""" function to ask whether to proceed or not """
response = raw_input(txt)
response = input(txt)
if response.lower() in yes:
return 1
if response.lower() in all:
@@ -113,7 +113,11 @@ def save_weekly_avg_1x1_data(dacycle, statevector):
fire = np.array(file.get_variable(dacycle.dasystem['background.co2.fires.flux']))
fossil = np.array(file.get_variable(dacycle.dasystem['background.co2.fossil.flux']))
#mapped_parameters = np.array(file.get_variable(dacycle.dasystem['final.param.mean.1x1']))
if dacycle.dasystem['background.co2.biosam.flux'] in file.variables.keys():
if dacycle.dasystem['background.co2.bio.lt.flux'] in list(file.variables.keys()):
lt_flux = True
bio_lt = np.array(file.get_variable(dacycle.dasystem['background.co2.bio.lt.flux']))
else: lt_flux = False
if dacycle.dasystem['background.co2.biosam.flux'] in list(file.variables.keys()):
sam = True
biosam = np.array(file.get_variable(dacycle.dasystem['background.co2.biosam.flux']))
firesam = np.array(file.get_variable(dacycle.dasystem['background.co2.firesam.flux']))
@@ -162,10 +166,13 @@ def save_weekly_avg_1x1_data(dacycle, statevector):
#
# if prior, do not multiply fluxes with parameters, otherwise do
#
print gridensemble.shape, bio.shape, gridmean.shape
biomapped = bio * gridmean
oceanmapped = ocean * gridmean
biovarmapped = bio * gridensemble
if lt_flux:
biomapped = bio_lt + bio * gridmean
biovarmapped = bio_lt + bio * gridensemble
else:
biomapped = bio * gridmean
biovarmapped = bio * gridensemble
oceanmapped = ocean * gridmean
oceanvarmapped = ocean * gridensemble
#
@@ -184,7 +191,7 @@ def save_weekly_avg_1x1_data(dacycle, statevector):
savedict['count'] = next
ncf.add_data(savedict)
print biovarmapped.shape
print(biovarmapped.shape)
savedict = ncf.standard_var(varname='bio_flux_%s_ensemble' % qual_short)
savedict['values'] = biovarmapped.tolist()
savedict['dims'] = dimdate + dimensemble + dimgrid
@@ -301,7 +308,7 @@ def save_weekly_avg_state_data(dacycle, statevector):
fire = np.array(file.get_variable(dacycle.dasystem['background.co2.fires.flux']))
fossil = np.array(file.get_variable(dacycle.dasystem['background.co2.fossil.flux']))
#mapped_parameters = np.array(file.get_variable(dacycle.dasystem['final.param.mean.1x1']))
if dacycle.dasystem['background.co2.biosam.flux'] in file.variables.keys():
if dacycle.dasystem['background.co2.biosam.flux'] in list(file.variables.keys()):
sam = True
biosam = np.array(file.get_variable(dacycle.dasystem['background.co2.biosam.flux']))
firesam = np.array(file.get_variable(dacycle.dasystem['background.co2.firesam.flux']))
@@ -555,7 +562,7 @@ def save_weekly_avg_tc_data(dacycle, statevector):
# Now convert other variables that were inside the flux_1x1 file
vardict = ncf_in.variables
for vname, vprop in vardict.iteritems():
for vname, vprop in vardict.items():
data = ncf_in.get_variable(vname)[index]
@@ -680,7 +687,7 @@ def save_weekly_avg_ext_tc_data(dacycle):
# Now convert other variables that were inside the tcfluxes.nc file
vardict = ncf_in.variables
for vname, vprop in vardict.iteritems():
for vname, vprop in vardict.items():
data = ncf_in.get_variable(vname)[index]
@@ -899,7 +906,7 @@ def save_weekly_avg_agg_data(dacycle, region_aggregate='olson'):
# Now convert other variables that were inside the statevector file
vardict = ncf_in.variables
for vname, vprop in vardict.iteritems():
for vname, vprop in vardict.items():
if vname == 'latitude': continue
elif vname == 'longitude': continue
elif vname == 'date': continue
@@ -1014,7 +1021,7 @@ def save_time_avg_data(dacycle, infile, avg='monthly'):
pass
file = io.ct_read(infile, 'read')
datasets = file.variables.keys()
datasets = list(file.variables.keys())
date = file.get_variable('date')
globatts = file.ncattrs()
@@ -1042,7 +1049,7 @@ def save_time_avg_data(dacycle, infile, avg='monthly'):
for d in vardims:
if 'date' in d:
continue
if d in ncf.dimensions.keys():
if d in list(ncf.dimensions.keys()):
pass
else:
dim = ncf.createDimension(d, size=len(file.dimensions[d]))
@@ -1072,7 +1079,7 @@ def save_time_avg_data(dacycle, infile, avg='monthly'):
time_avg = [time_avg]
data_avg = [data_avg]
else:
raise ValueError, 'Averaging (%s) does not exist' % avg
raise ValueError('Averaging (%s) does not exist' % avg)
count = -1
for dd, data in zip(time_avg, data_avg):
......
@@ -19,7 +19,6 @@ import shutil
import logging
import netCDF4
import numpy as np
from string import join
from datetime import datetime, timedelta
sys.path.append('../../')
from da.tools.general import date2num, num2date
@@ -71,6 +70,10 @@ def write_mole_fractions(dacycle):
ncf_in = io.ct_read(infile, 'read')
if len(ncf_in.dimensions['obs']) == 0:
ncf_in.close()
return None
obs_num = ncf_in.get_variable('obs_num')
obs_val = ncf_in.get_variable('observed')
simulated = ncf_in.get_variable('modelsamples')
@@ -148,11 +151,10 @@ def write_mole_fractions(dacycle):
ncf_out = io.CT_CDF(copy_file, 'write')
# Modify the attributes of the file to reflect added data from CTDAS properly
try:
host=os.environ['HOSTNAME']
except:
host='unknown'
try:
host=os.environ['HOSTNAME']
except:
host='unknown'
ncf_out.Caution = '==================================================================================='
......
@@ -28,7 +28,7 @@ import datetime as dt
import os
import sys
import shutil
import time_avg_fluxes as tma
from . import time_avg_fluxes as tma
basedir = '/Storage/CO2/ingrid/'
basedir2 = '/Storage/CO2/peters/'
@@ -60,12 +60,12 @@ if __name__ == "__main__":
os.makedirs(os.path.join(targetdir,'analysis','data_%s_weekly'%nam) )
timedirs=[]
for ss,vv in sources.iteritems():
for ss,vv in sources.items():
sds,eds = ss.split(' through ')
sd = dt.datetime.strptime(sds,'%Y-%m-%d')
ed = dt.datetime.strptime(eds,'%Y-%m-%d')
timedirs.append([sd,ed,vv])
print sd,ed, vv
print(sd,ed, vv)
while dacycle['time.start'] < dacycle['time.end']:
......
File moved
@@ -38,10 +38,10 @@ import datetime as dt
import da.tools.io4 as io
import logging
import copy
from da.analysis.summarize_obs import nice_lon, nice_lat, nice_alt
from da.analysis.cteco2.summarize_obs import nice_lon, nice_lat, nice_alt
from PIL import Image
import urllib2
import StringIO
import urllib.request, urllib.error, urllib.parse
import io
"""
General data needed to set up proper aces inside a figure instance
@@ -336,7 +336,7 @@ def timehistograms_new(fig, infile, option='final'):
# Get a scaling factor for the x-axis range. Now we will include 5 standard deviations
sc = res.std()
print 'sc',sc
print('sc',sc)
# If there is too little data for a reasonable PDF, skip to the next value in the loop
if res.shape[0] < 10: continue
@@ -435,12 +435,12 @@ def timehistograms_new(fig, infile, option='final'):
#fig.text(0.12,0.16,str1,fontsize=0.8*fontsize,color='0.75')
try:
img = urllib2.urlopen('http://www.esrl.noaa.gov/gmd/webdata/ccgg/ObsPack/images/logos/'+SDSInfo['lab_1_logo']).read()
img = urllib.request.urlopen('http://www.esrl.noaa.gov/gmd/webdata/ccgg/ObsPack/images/logos/'+SDSInfo['lab_1_logo']).read()
except:
logging.warning("No logo found for this program, continuing...")
return fig
im = Image.open(StringIO.StringIO(img))
im = Image.open(io.StringIO(img))
height = im.size[1]
width = im.size[0]
@@ -674,12 +674,12 @@ def timevssite_new(fig, infile):
#fig.text(0.12, 0.16, str1, fontsize=0.8 * fontsize, color='0.75')
try:
img = urllib2.urlopen('http://www.esrl.noaa.gov/gmd/webdata/ccgg/ObsPack/images/logos/'+SDSInfo['lab_1_logo']).read()
img = urllib.request.urlopen('http://www.esrl.noaa.gov/gmd/webdata/ccgg/ObsPack/images/logos/'+SDSInfo['lab_1_logo']).read()
except:
logging.warning("No logo found for this program, continuing...")
return fig
im = Image.open(StringIO.StringIO(img))
im = Image.open(io.StringIO(img))
height = im.size[1]
width = im.size[0]
@@ -933,12 +933,12 @@ def residuals_new(fig, infile, option):
#fig.text(0.12, 0.16, str1, fontsize=0.8 * fontsize, color='0.75')
try:
img = urllib2.urlopen('http://www.esrl.noaa.gov/gmd/webdata/ccgg/ObsPack/images/logos/'+SDSInfo['lab_1_logo']).read()
img = urllib.request.urlopen('http://www.esrl.noaa.gov/gmd/webdata/ccgg/ObsPack/images/logos/'+SDSInfo['lab_1_logo']).read()
except:
logging.warning("No logo found for this program, continuing...")
return fig
im = Image.open(StringIO.StringIO(img))
im = Image.open(io.StringIO(img))
height = im.size[1]
width = im.size[0]
......
@@ -15,7 +15,7 @@ import sys
sys.path.append('../../')
import os
import numpy as np
import string
#import string
import datetime as dt
import logging
import re
@@ -36,9 +36,9 @@ def nice_lat(cls,format='html'):
#return string.strip('%2d %2d\'%s' % (abs(deg), round(abs(60 * dec), 0), h))
if format == 'python':
return string.strip('%3d$^\circ$%2d\'%s' % (abs(deg), round(abs(60 * dec), 0), h))
return ('%3d$^\circ$%2d\'%s' % (abs(deg), round(abs(60 * dec), 0), h)).strip()
if format == 'html':
return string.strip('%3d&deg%2d\'%s' % (abs(deg), round(abs(60 * dec), 0), h))
return ('%3d&deg%2d\'%s' % (abs(deg), round(abs(60 * dec), 0), h)).strip()
def nice_lon(cls,format='html'):
#
@@ -53,16 +53,16 @@ def nice_lon(cls,format='html'):
#return string.strip('%3d %2d\'%s' % (abs(deg), round(abs(60 * dec), 0), h))
if format == 'python':
return string.strip('%3d$^\circ$%2d\'%s' % (abs(deg), round(abs(60 * dec), 0), h))
return ('%3d$^\circ$%2d\'%s' % (abs(deg), round(abs(60 * dec), 0), h)).strip()
if format == 'html':
return string.strip('%3d&deg%2d\'%s' % (abs(deg), round(abs(60 * dec), 0), h))
return ('%3d&deg%2d\'%s' % (abs(deg), round(abs(60 * dec), 0), h)).strip()
def nice_alt(cls):
#
# Reformat elevation or altitude
#
#return string.strip('%10.1f masl' % round(cls, -1))
return string.strip('%i masl' %cls)
return ('%i masl' %cls).strip()
def summarize_obs(analysisdir, printfmt='html'):
@@ -97,9 +97,9 @@ def summarize_obs(analysisdir, printfmt='html'):
infiles = [os.path.join(mrdir, f) for f in mrfiles if f.endswith('.nc')]
if printfmt == 'tex':
print '\\begin{tabular*}{\\textheight}{l l l l r r r r}'
print 'Code & Name & Lat, Lon, Elev & Lab & N (flagged) & $\\sqrt{R}$ &Inn \\XS &Bias\\\\'
print '\hline\\\\ \n\multicolumn{8}{ c }{Semi-Continuous Surface Samples}\\\\[3pt] '
print('\\begin{tabular*}{\\textheight}{l l l l r r r r}')
print('Code & Name & Lat, Lon, Elev & Lab & N (flagged) & $\\sqrt{R}$ &Inn \\XS &Bias\\\\')
print('\hline\\\\ \n\multicolumn{8}{ c }{Semi-Continuous Surface Samples}\\\\[3pt] ')
fmt = '%8s & ' + ' %55s & ' + '%20s &' + '%6s &' + ' %4d (%d) & ' + ' %5.2f & ' + ' %5.2f & ' + '%+5.2f \\\\'
elif printfmt == 'html':
tablehead = \
@@ -136,7 +136,7 @@ def summarize_obs(analysisdir, printfmt='html'):
<TD>%s</TD>\n \
</TR>\n"""
elif printfmt == 'scr':
print 'Code Site NObs flagged R Inn X2'
print('Code Site NObs flagged R Inn X2')
fmt = '%8s ' + ' %55s %s %s' + ' %4d ' + ' %4d ' + ' %5.2f ' + ' %5.2f'
table = []
@@ -282,7 +282,6 @@ def make_map(analysisdir): #makes a map of amount of assimilated observations pe
import netCDF4 as cdf
import matplotlib.pyplot as plt
import matplotlib
from maptools import *
from matplotlib.font_manager import FontProperties
sumdir = os.path.join(analysisdir, 'summary')
@@ -363,15 +362,15 @@ def make_map(analysisdir): #makes a map of amount of assimilated observations pe
ax.annotate(labs[i],xy=m(172,86-count),xycoords='data',fontweight='bold')
count = count + 4
fig.text(0.15,0.945,u'\u2022',fontsize=35,color='blue')
fig.text(0.15,0.945,'\u2022',fontsize=35,color='blue')
fig.text(0.16,0.95,': N<250',fontsize=24,color='blue')
fig.text(0.30,0.94,u'\u2022',fontsize=40,color='green')
fig.text(0.30,0.94,'\u2022',fontsize=40,color='green')
fig.text(0.31,0.95,': N<500',fontsize=24,color='green')
fig.text(0.45,0.94,u'\u2022',fontsize=45,color='orange')
fig.text(0.45,0.94,'\u2022',fontsize=45,color='orange')
fig.text(0.46,0.95,': N<750',fontsize=24,color='orange')
fig.text(0.60,0.939,u'\u2022',fontsize=50,color='brown')
fig.text(0.60,0.939,'\u2022',fontsize=50,color='brown')
fig.text(0.61,0.95,': N<1000',fontsize=24,color='brown')
fig.text(0.75,0.938,u'\u2022',fontsize=55,color='red')
fig.text(0.75,0.938,'\u2022',fontsize=55,color='red')
fig.text(0.765,0.95,': N>1000',fontsize=24,color='red')
ax.set_title('Assimilated observations',fontsize=24)
......