MeetingNotes070418

Attending

  • Jack, Jerry, Marco

Agenda

  • DSS Status and Progress
  • Last week's action items
  • ...

Discussion

Development meeting.

Jerry provided a new version of the scripts: ~gfg/rje/dss_software.tar.gz on tier2-06.uchicago.edu

Based on the contents of the configuration file "DSS.conf", the dss_software performs a skim based on a user query. It partitions the output collection into multiple sub-collections, split by GUID; the number of events in each sub-collection is a user-defined parameter. The dss_software then creates, for each sub-collection, a PoolFileCatalog.xml file specific to that sub-collection, and a Condor job is submitted for each sub-collection. Where the output files appear, for the collection and for each sub-collection, is also user defined.
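
As a toy illustration of the partitioning step described above, the sketch below splits skimmed events by GUID and caps each sub-collection at a user-defined number of events. It is not the actual dss_software code: the GUIDs, event numbers and the per_sub value are made up, and the real DSS.conf parameter names are not shown here.

    # Toy sketch of the partitioning described in the notes; all values are illustrative.
    from itertools import groupby

    per_sub = 3   # user-defined number of events per sub-collection (from DSS.conf)

    # Skim output: (GUID of the source file, event number) pairs.
    events = [("GUID-A", 1), ("GUID-A", 2), ("GUID-A", 3), ("GUID-A", 4),
              ("GUID-B", 5), ("GUID-B", 6)]

    # Split by GUID, then cap each sub-collection at per_sub events.
    subcollections = []
    for guid, group in groupby(sorted(events), key=lambda e: e[0]):
        group = list(group)
        for i in range(0, len(group), per_sub):
            subcollections.append(group[i:i + per_sub])

    for n, sub in enumerate(subcollections):
        # In dss_software each sub-collection would get its own PoolFileCatalog.xml
        # and its own Condor job; here we just print the partition.
        print("sub-collection %d: %s" % (n, sub))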

There is no easy, standard way to know the type of a backend. Using the DMU we are piggy-backing on Panda, which uses some heuristics and static configurations that are kept up to date in order to run production.

The filetype returned by querying the LRC seems to differ (NULL or empty string). Is this due to a different way of querying or to different versions of LRC/DQ2? It would be useful to have something consistent. Ideally ATLAS DDM should report the correct filetype (store it in the LRC, send it with the files during transfers, ...).
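
As a small illustration of the inconsistency noted above, here is a sketch of how a client could normalize the two "missing" representations (NULL and empty string) until DDM reports the filetype consistently; the function name and sample values are made up:

    # Toy normalization of the filetype field returned by LRC queries (illustrative only).
    def normalize_filetype(value):
        """Map both 'missing' representations (None and '') to a single sentinel."""
        if value is None or value == "":
            return "UNKNOWN"
        return value

    # Values as they might come back from different LRC/DQ2 versions (made up).
    for raw in (None, "", "AOD"):
        print(repr(raw), "->", normalize_filetype(raw))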

To store our software we will use Subversion. CVS would be preferred because we already know it, but the SVN server is at UC, so we may have more control over it. In the future we may move to CERN CVS, especially if dependencies to/from ATLAS releases arise. Marco will send an email around mid-afternoon today (after checking with Suchandra, who manages the server) with info about the repository and pointers to documentation.

Tests with Django, a Web framework that could provide a GUI for DSS and be used as an application (HTTP server). Initially it is a simple server allowing interaction with Python programs and an SQLite backend. For production it can use Apache and a different DB (e.g. MySQL).
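
For reference, a minimal sketch of the database switch in Django's settings.py, assuming the setting names of the Django releases current at the time (later versions use different names); the database name, user and password are placeholders:

    # settings.py fragment (Django 0.9x-style names; later releases use a DATABASES dict).

    # Development: built-in "runserver" with an SQLite file.
    DATABASE_ENGINE = 'sqlite3'
    DATABASE_NAME = 'dss.db'          # placeholder path

    # Production (commented out): Apache in front, MySQL as the backend.
    # DATABASE_ENGINE = 'mysql'
    # DATABASE_NAME = 'dss'
    # DATABASE_USER = 'dss'
    # DATABASE_PASSWORD = 'changeme'  # placeholder
    # DATABASE_HOST = 'localhost'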

TAG browser: Marco found no suitable browser. He will provide a page with some queries.

TAG DB tables

Notes from SW week. Ian: 8-to-1 merging for the TAGs (8 AOD files for 1 TAG file). aod2tag should be able to take a list of AODs as input and produce a single TAG output. Jack will check this and how general the extraction transformation is.
  • MergeAODwithTags_v12.py can take a list like In=['file1','file2',...] (see the sketch after this list)
  • EventExtractor.py is specific to AOD in two respects:
    • Converters loaded. This could be a bit tricky. It may become easier in the release 13 cycle.
    • Stream name is StreamAODm. This could easily be made configurable.
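
A hypothetical job-options fragment showing how the list input and a configurable stream name might look; apart from the In=['file1','file2',...] form quoted above, the option names and file names are assumptions, not the actual interface:

    # Hypothetical Athena job-options fragment; only the In=[...] form comes from the notes.
    # (Meant to run inside athena, where include() is provided by the framework.)
    In = ['AOD.pool.root.1', 'AOD.pool.root.2']   # several AOD inputs for a single TAG output
    StreamName = 'StreamAODm'                     # currently hard-coded in EventExtractor.py;
                                                  # the notes say this could be made configurable
    include('MergeAODwithTags_v12.py')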

Next meeting 4/25 at 10am, direct phone call (Jack and Jerry)

-- MarcoMambelli - 18 Apr 2007