FY07-Q1 Effort Report of Marco Mambelli

This effort report covers the period of activity from October through December 2006. The deliverables and milestones achieved are as follows:

  • USATLAS Tier2 Data Service activity.
    • This activity involves a joint group from US-ATLAS Midwest Tier2 (University of Chicago and Indiana University) and Argonne National Lab (Robert Gardner, Dan Schrager, Jack Cranshaw, Tom LeCompte, David Malon, Sasha Vaniachine, Jerry Gieraltowsky, Ed May).
    • Goals of the activity are the design, prototyping, and packaging of a service that could be deployed at ATLAS Tier2 sites and would provide the following functionality:
      • Provide users the ability to access the Tier2's DQ2 server.
      • Host or provide access to ATLAS specific database services, such as TAG and possibly conditions (IOV and calibration) databases.
      • Provide a skimming service for Tier2-resident datasets through either command line or web interfaces (Dataset Skimming Service - DSS).
    • Group coordination (chairing meetings, taking minutes)
    • Planning, scope definition and initial organization
    • Prototype machine initial deployment
      • software installation
      • testing (with Jack Cranshaw)
    • Deployment of initial version 0.1 of the Dataset Skimming Service on UC_ATLAS_MWT2
    • Functionality test of DSS on a small data sample (part of testIdeal_07.005711) documented in SkimTest061212
    • Initial development of the Data Movement Utilities component of DSS to move data reliably between local SEs (Storage Elements) and worker nodes of CEs (Computing Elements), including file registration in the DQ2 servers
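The reliable-movement idea behind DMU can be sketched as a retry-then-register pattern: attempt the transfer a bounded number of times, and only register the file in the catalog once the copy succeeds. Everything in this sketch (function names, the callback-based catalog registration) is an illustrative assumption, not the actual DMU or DQ2 code:

```python
# Illustrative sketch only: the real DMU component is not shown in this
# report; names and signatures here are hypothetical.
import time


def move_with_retry(copy_fn, src, dest, retries=3, delay=0.0):
    """Attempt a copy between an SE and a worker node, retrying on failure.

    Returns the (1-based) attempt number that succeeded.
    """
    last_err = None
    for attempt in range(1, retries + 1):
        try:
            copy_fn(src, dest)
            return attempt          # transfer succeeded on this attempt
        except OSError as err:
            last_err = err
            time.sleep(delay)       # back off before retrying
    raise RuntimeError(f"copy {src} -> {dest} failed: {last_err}")


def move_and_register(copy_fn, register_fn, src, dest):
    """Move a file reliably, then register it in a DQ2-like catalog."""
    attempts = move_with_retry(copy_fn, src, dest)
    register_fn(dest)               # register only after a successful copy
    return attempts
```

Registering only after a verified transfer is what keeps the catalog consistent with the storage: a failed copy never leaves a dangling catalog entry.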

  • Programming for USATLAS PanDA executor
    • Development of the JobScheduler in collaboration with Xin Zhao and Paul Nilsson PandaJobScheduler
      • Development of the JobScheduler itself
      • Support (mostly help with testing and debugging) for Xin and Paul, who are working on Pilot development.
      • Provide the same API as the old data-moving functions in DQ2ProdClient2.py so that they can be replaced transparently with DMU (a component of DSS), requiring no changes in other Panda components.
    • Packaging of the PanDA packages (PandaJS, PandaSrv, PandaJDE, Panda, and the auxiliary DQ-Client), 0.2.x versions, and installation and upgrades of the production submit host software (ATLAS components only, not the OS) on the UTA machine atlas002.uta.edu.
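The transparent replacement described above is an adapter: the old entry point keeps its name and signature while delegating to the new backend. This is a minimal sketch of the idea only; the function and class names are hypothetical, not the real DQ2ProdClient2.py or DMU API:

```python
# Hypothetical sketch of a drop-in replacement: callers keep using the
# old-style function while the work is delegated to a DMU-like backend.

class DMUBackend:
    """Stand-in for the Data Movement Utilities component (illustrative)."""

    def transfer(self, source, destination):
        # A real backend would perform the SE <-> worker-node copy here.
        return f"moved {source} to {destination}"


_dmu = DMUBackend()


def move_files(source, destination):
    """Old-style entry point; callers elsewhere in Panda stay unchanged."""
    return _dmu.transfer(source, destination)
```

Because the signature is preserved, the swap needs no coordinated changes in the other Panda components that import the old function.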
  • Troubleshooting and support of USATLAS production activity
    • Troubleshooting of different problems involving
      • job submission at gatekeeper
      • file transfer
      • pilots execution
  • Running shifts. Since the end of October I have been the only maintainer of a production submit host (tier2-06.uchicago.edu) submitting pilots to clusters with advanced Storage Elements (e.g. dCache sites other than BNL, or anything different from an NFS shared disk). This submit host runs pilot2 pilots, developed by Paul and Xin, which also provide new logging and recovery features. The host has also been used for testing new CEs and for debugging pilot2 and DMU.

  • MWT2 activity
    • Support for the deployment of the new cluster: MWT2
    • Troubleshooting of different problems
      • DQ2 operation (cleanup, troubleshooting)
      • Job submission and queue management

  • No Capone activity in October - the code and documentation of the former USATLAS executor Capone are kept available for studies and comparisons:
    • Release 1.2 is still the current one

  • Activity within the ATLAS worldwide collaboration
    • Involvement in the ATLAS ProdSys protocol maintenance
    • Involvement in the effort to define common ATLAS error codes returned by the production system
    • Participation in the ATLAS monitoring effort led by John Kennedy (FZK-CERN).

  • Grid activities (OSG and Interoperability efforts)
    • Participation in the GLUE Schema v1.3 effort:
      • Publishing of SRM-based SEs information
      • Involvement in the GIP (Generic Information Provider) activity within OSG
      • Discussion of changes for GLUE 1.3
    • Participation in the CE storage activity within OSG to provide guidance on how to use disk space from within the CE environment.

-- MarcoMambelli - 10 Apr 2007