
ATLAS LAr Back-End Electronics Installation





    2.1. LV lines

Each DC-DC converter powering a Front-End Crate is fed from a 280 V power supply module installed in 8 racks in USA15, through a shielded twisted-pair cable approximately 70 m long. The connection is point-to-point (i.e. there is no bussed distribution of the 280 V lines), for a total of 58 cables (32 for the barrel, 26 for the end-cap calorimeters). An additional 8 lines will power the HEC LV boxes.

The cables will be produced in double length, equipped with connectors at both ends and tested:



  • The cables will be equipped with connectors mating the DC-DC converter input connector (Molex Part Nr #####). A mating part will also be installed to avoid exposed terminals at any time during installation and routing of the cables.

  • Before installation, DC tests will be carried out on the cables to check continuity and look for shorts.

  • Cables will be labeled at both ends with the Crate ID (see ATC-OP-QA-0001 for naming conventions) plus the ATLAS TC label.

  • The cables will be lowered into the pit, cut to the proper length and pulled through the cable trays up to USA15.

  • In USA15 the second connector will be installed, and the DC tests will be repeated.

  • Cables will be routed to the corresponding rack/sub-rack and connected. Continuity between the cable shield and the power supply case will also be verified.

The time and manpower estimates are detailed in [7], section 10.


    2.2. HV cables

The HV units in USA15 are connected to the HV-feedthrough (HV-FT) entrances of the cryostats by multi-conductor (37-wire), double-shielded cables (Kerpen, 6 kV, 13 mm OD). They are equipped at both ends with REDEL-LEMO connectors. Each cable carries 32 independent HV channels and connects one HV module with its mating part on the HV-FT filter crate, which is placed directly over the HV-FT. The cables are produced in double length, equipped with connectors at both ends and tested. This work is done by a subcontractor on the CERN site. The tests include connectivity checks as well as 6 kV HV tests of the individual conductors.

Cables are delivered to the EMF rolled in caged pallets, with ATLAS labels at both ends and the connectors protected. Cables are arranged in the pallets (6 double bundles, i.e. 12 cables per pallet), which are also labeled. The total number of cables is 54 (Barrel) + 51 (EC-A) + 51 (EC-C) = 156 cables in 13 pallets. The installation sequence should be as follows:
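The delivery arithmetic above can be cross-checked with a throwaway sketch (all numbers are taken from the text):

```python
import math

# Cable counts quoted in the text above.
cables = {"Barrel": 54, "EC-A": 51, "EC-C": 51}
total = sum(cables.values())

# Each caged pallet holds 6 double bundles, i.e. 12 cables.
cables_per_pallet = 12
pallets = math.ceil(total / cables_per_pallet)

print(total, pallets)  # 156 13
```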



  • Lower a pallet to the pit floor (crane or lift)

  • Un-bundle the double bundles one by one in the pit and cut them at half length

  • Pull cables one by one, starting at the FT, via cable trays and cable chains up to the USA15 level-2 racks

  • Mount the second connector in USA15

  • Labeling: HV module label + ATLAS-TC label

  • HV-test cables in pairs, by connecting 2 cables at the detector end and the test system at the rack end (this could be one reference cable plus the cable under test)

  • Route cables on the back and front faces of the USA15 racks (such that individual modules can be removed)

  • Finally, connect to the HV-FT and to the HV module

The mounting of connectors and the tests in USA15 will be performed by the same subcontractor that makes the cables, in close contact with the LAr team. The time and manpower estimate for this work is given in [7], section 10.


    2.3. DCS cables

LAr DCS has dedicated cables for:

  • CANBus communication and ELMB power (to the USA15 DCS rack). These cables are standard items from CERN stores. The ATLAS DCS team is currently considering whether the placement of the 2nd connector in USA15 and the cable testing should be coordinated centrally for all ATLAS DCS systems.

  • Temperature probe cables from the FEC to the ELMBs. These are cables from Axon. It has been agreed that these cables will be run with both connectors in place. They will be tested before and after installation (Orsay).

  • Purity signal cables from FEC to USA15 DCS rack. These are standard network cables. The 2nd connector and testing will be done by Mainz.

The estimation of time and manpower for different types of DCS cables can be found in [7], section 10.



  3. Level 1 Receiver Station


3.1. L1 Receiver Description

The receiver system is part of the trigger sum chain and interfaces between the Tower Builder (Driver) Boards and Level 1. One function of the trigger sum chain is to convert the signal from energy to transverse energy. In the case of the EM calorimeters, this conversion is carried out to an accuracy of a few percent, whereas in other cases it is correct only to within a factor of two. The final gain adjustment is therefore left to the receiver, which must also account for attenuation in the cable between the Tower Builder (Driver) and the USA15 cavern. Because of the need for continuously variable gain over a relatively small range, a stage of programmable gain is included in the receiver module. This permits fine control over the calibration of the trigger sum signals, which is useful for the Level 1 trigger. In addition, the signal inputs to the receiver system are either fixed in η and variable in φ, or fixed in φ and variable in η. Level 1 requires the trigger sums to be bundled in η–φ bins, and this is done via a daughter remapping board located on each receiver module. Finally, the system provides for the selection of any of the raw trigger sum signals for diagnostic tests, using special monitoring modules located in the receiver crates.
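As an illustration of the energy-to-transverse-energy conversion and the role of the programmable fine gain described above (a sketch only; the function names are hypothetical and this is not the receiver firmware):

```python
import math

def et_from_e(e, eta):
    """Transverse energy from energy: E_T = E / cosh(eta)."""
    return e / math.cosh(eta)

# The upstream conversion is correct only to within a factor of two in
# some cases; the receiver's programmable fine gain makes up the rest.
e, eta = 100.0, 1.2
target_et = et_from_e(e, eta)

coarse_gain = 0.5                          # approximate conversion done upstream
fine_gain = target_et / (e * coarse_gain)  # gain programmed into the receiver

received = e * coarse_gain * fine_gain
print(abs(received - target_et) < 1e-9)  # True
```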

The receiver chain consists of four op-amps, the first two of which are located on a 16-channel variable gain amplifier (VGA) daughterboard. The third amplifier performs an RC integration, and the fourth op-amp is a driver circuit. The receiver is transformer-coupled at both the input and the output, in order to reduce the sensitivity to ground-level differences between the detector and USA15, where the receiver crate is located. Each receiver module services 64 such channels, along with circuitry to select channels for monitoring purposes. A detailed description of the functionality of the receiver system is given in [4].

Each receiver crate contains 16 receiver modules, two monitoring modules, and one controller module. The crate is a standard 9U with a custom backplane, which is used to transport both the digital signals between the controller and the other modules and the analog monitoring signals. The full system consists of a total of 6 (8) receiver crates: 2 for the EM barrel, 2 for the EM end-caps, and 2 for the forward calorimeters and hadronic end-caps (plus 2 for TileCal).



3.2. L1 Racks and Crate Layout

The receiver crates are located in the Level 1 trigger racks in row 2, as shown in Figure 1. Figure 2 shows the physical crate layout within the racks for side C of the detector. On the far left is the rack containing monitoring equipment, then two receiver racks, followed by two L1 preprocessor racks on the right. The second half of the system, which services the "A" end of the experiment, is identical, except that it is a mirror image of the above layout. Rack numbers 1, 2 and 3 in Figure 2 correspond to rack numbers 6, 7 and 8 respectively in Figure 1; for the "A" side, racks 2, 3 and 1 correspond to rack numbers 24, 25 and 26 respectively in Figure 1.
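The rack correspondence described above can be written as two small lookup tables (a restatement of the text, nothing more):

```python
# Figure 2 local rack number -> Figure 1 row-2 rack number.
side_c = {1: 6, 2: 7, 3: 8}
side_a = {2: 24, 3: 25, 1: 26}  # side A is the mirror image

print(side_c[1], side_a[2])  # 6 24
```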

Figure 3 is a schematic drawing of the cabling of signals (trigger sums) onto the receiver system and from the receiver outputs to the Level 1 preprocessor crates. The diagram shows the receivers from one end of the detector, which represents one half of the total system. There will be two receiver crates per rack, so the entire receiver system will occupy four racks.




3.3. EMF Activities and Requirements

All modules will be thoroughly tested prior to delivery to CERN. As modules arrive at CERN, they will undergo simple pre-installation tests of basic functionality at the Electronics Maintenance Facility (EMF) prior to installation into the appropriate crates.

A receiver crate will be permanently stationed at the EMF. The 9U crate will include our custom backplane and a tested crate controller module, to be used for tests and troubleshooting of the receiver and monitor modules. We have ordered 6 new crates in 2003.

One crate will be delivered early in 2004 and will be used in the combined test-beam runs. A second crate will arrive mid-year 2004 and be sent to the EMF for use in our receiver test station. In addition to the crate, some control and diagnostic equipment will be needed, such as an ADC/MUX, a standalone DAQ system and a control/readout PC. In total we will require enough space for a full-sized rack, a storage cabinet for spares, and a workbench: a 3 × 3 m work area in all.

The primary purpose of the receiver test crate in the EMF will be to diagnose problems that occur in the system during data taking. At least one technician permanently stationed at the EMF will be trained in the details of operating the receiver test station and in correcting simple problems, such as replacing daughterboards or complete modules. This technician will also determine which problems are serious enough to require shipment back to the University of Pittsburgh for more substantial repairs. We will stock sufficient spares in the EMF. Prior to shipment to CERN, the receiver system components will go through rigorous testing; nevertheless, simple tests, such as channel continuity and control communication checks, will be performed on all incoming modules prior to installation.


3.4. Receiver Installation

The procedure described in this section assumes that all the needed cables have been installed and the front-end electronics (FEBs and trigger boards) are operating and debugged. Only the final installation in USA15 will be discussed. The installation procedure of the receiver system into USA15 can be broken down into two main steps:

  • Physical installation of receiver/monitor modules into crate and cabling

  • Testing of individual trigger sum paths and timing study of trigger sums

Then cabling, both at the inputs of the receiver modules and at the outputs to the Level 1 preprocessor, will take place. It is assumed that all the hardware required for cable stress relief has been designed and installed on the racks prior to this time. Approximately two crates per day can be completed, so a total of up to 2 weeks is needed for the full system. The manpower requirement is two full-time persons: one senior physicist and one graduate student.

Most of the time will be dedicated to checking the trigger sum cable paths and to studies of the relative timing between the four layers that make up a trigger sum, and between sums constructed from two physically distinct detector elements, such as the barrel and end-cap. In addition, the trigger sum branch and the main data acquisition branch must be tested simultaneously, so as to check the data consistency between the two paths. It is assumed that the software and hardware needed for these tests have been written, installed and debugged prior to performing these tasks, and that the LARG calibration system is operational and running.

For commissioning, the trigger sum signals are digitized in a special ADC located in the monitoring station VME crate, whose controller will be connected to the ROD system controller via Ethernet. Checking that the trigger sum cable paths are correct requires that only one channel be excited with a calibration pulse at a time. With over 6000 channels of LARG trigger sums, the process must be automated and should take no more than ~1 minute per channel. At the same time, the consistency between the trigger path and the readout path will be checked. In total, 2 weeks involving two full-time persons will be needed to complete this task.
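The quoted two-week figure follows directly from the channel count and the per-channel scan time (a back-of-the-envelope check using the numbers above):

```python
# "Over 6000 channels", ~1 minute per channel, one 8-hour shift per day.
channels = 6000
minutes_per_channel = 1.0
shift_hours = 8

days = channels * minutes_per_channel / 60 / shift_hours
print(days)  # 12.5 working days, i.e. roughly 2 weeks of shifts
```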

For the timing studies, blocks of trigger sums can be checked simultaneously; one crate will therefore be tested at a time, where only one of the four layers is excited with a calibration pulse. Assuming this procedure has to be performed a few times before the timing is corrected and verified, it could take up to 2 days per crate, or approximately 2 weeks for the full system with two full-time persons. Both testing steps, trigger sum paths and timing, will require dedicated use of the readout and calibration system for about one month with one 8-hour shift per day.

In summary, the receiver installation and commissioning process will require two full-time persons, one senior and one junior physicist, for approximately 6 weeks. Four of those weeks will require dedicated use of the LARG readout and calibration systems for 8 hours per day. In addition, one person will be needed for technical work for approximately 2 weeks.


  4. Readout Driver System


4.1. ROD system layout

The LARG ROD system is described in a set of documents [5]. It consists of 16 VME 9U crates and 6 workstations, to be installed in 8 racks on USA15 level 2. The layout foreseen for the ROD system is shown in Figure 4. In order to reduce interference between subdetectors, each partition is placed in a separate rack (or racks), except for the two small partitions, HEC and FCAL. The placement of the workstations is not critical, since they are connected via Ethernet; nevertheless, it is convenient to place each workstation near the corresponding partition for debugging and commissioning work.

One rack is fully assigned to services: the water cooling system and a test crate. The cooling system provides water distribution to the ROD modules, and control and monitoring of temperature, pressure and water flow. The water station and control unit occupy ~20U at the bottom of rack 10-16. The water is carried to 1U drawers installed on top of each ROD crate. Each drawer has 4 independent channels, each distributing water to a group of up to 4 ROD modules. The rest of the rack is reserved for test tools needed for effective diagnostics and debugging during system installation and commissioning.

The front-end links end up in patch boxes installed at the bottom of racks 11-16 … 17-16. The cables are split there into ribbons and pass through the front panel to the corresponding crate. They are further split near the crate into individual fibers to be plugged into the ROD modules. A detailed description of the FEB-to-ROD connection scheme is available on the ROD installation web page [6].





4.2. Preparation for installation

All ROD parts arrive at the EMF storage area at least one month before installation. In the EMF all parts are labeled and recorded in the installation database. All modules and crates must pass dedicated pre-installation tests before moving to USA15. These tests include the following steps:



  1. Perform basic tests of the crate: water leakage test, power up-down sequences, booting and configuration, temperature and power consumption conditions.

  2. Equip ROD crate with all modules corresponding to the configuration to be installed.

  3. Test the full crate functionality by injecting predefined input data from the Injector modules. This includes the monitoring data flow through the VME bus and the optical output data transfer. There are only 4 readout channels available in the ROD test bench in the EMF, so only one slot can be tested at a time.

  4. When a crate has passed all tests, the crate configuration is recorded in the installation database. This is a preliminary record, since the crate configuration can change later during tests in USA15.

  5. Dismount all modules and power supply units and pack them for the transfer to USA15.
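The five steps above amount to a per-crate checklist whose completion gates the transfer to USA15. A minimal sketch follows (the class and step names are hypothetical, not the actual installation-database schema):

```python
from dataclasses import dataclass, field

# Pre-installation test sequence, paraphrased from the text above.
STEPS = [
    "basic crate tests (water leak, power cycling, booting, temperature)",
    "equip crate with modules for the target configuration",
    "full-crate test with Injector data (VME monitoring + optical output)",
    "record preliminary crate configuration in the installation DB",
    "dismount modules and pack for transfer to USA15",
]

@dataclass
class CrateRecord:
    crate_id: str
    passed: list = field(default_factory=list)

    def complete(self, step):
        assert step in STEPS, "unknown step"
        self.passed.append(step)

    def ready_for_usa15(self):
        # A crate moves to USA15 only after every step has passed.
        return all(s in self.passed for s in STEPS)

rec = CrateRecord("ROD-crate-01")   # hypothetical identifier
for s in STEPS:
    rec.complete(s)
print(rec.ready_for_usa15())  # True
```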

The manpower needed for this work can be estimated as 2 man-days of technical work and 3 man-days of engineer/physicist time per ROD crate.
4.3. Services in USA15

The installation in USA15 will start with the water cooling station in rack 10-16 and the mounting of water pipes in the ROD racks. The cooling system has to be functional by the time the first ROD board is installed; this work will therefore be done prior to the ROD crate installation. The cooling system will be mounted in 3 steps:



  1. Install cooling station, connect to DCS and perform functional tests

  2. Mount pipes in the ROD racks, check the water flow and look for leaks using a simple bypass

  3. Install the drawers, plug in the input and output pipes and pressure-check the couplers

The next step will be the installation of the 6 partition masters. These computers are needed at the earliest stage of ROD installation for basic crate tests. Since the ROD installation will last several months, not all 6 PCs will be needed at the same time. Most likely, the computers will be ordered and installed following the partition installation schedule; this scenario minimizes the use of obsolete equipment. The installation of the PCs is a well-known and well-defined procedure, consisting basically of system and software installation and configuration.

It is assumed that the external systems, such as the TTC and LARG DCS, are already operational and ready for the ROD installation. If the readout system is not available, the ROD output data can be read by FILAR cards installed either in one of the partition masters or in a dedicated PC placed in service rack 10-16.


4.4. ROD crate installation

The ROD system will be installed crate by crate, following the needs of the front-end electronics installation. The ROD crate installation proceeds in steps similar to the pre-installation tests of section 4.2:



  1. Mount the crate and power supply, connect the water pipes to the LV unit and check for water leaks and proper flow. Connect to ground and mains, and check the temperature and power consumption

  2. Connect the crate CANBus line to the LARG DCS workstation and perform a CANBus functional test

  3. Install CPU and perform basic VME tests

  4. Install the Injector, TBM and SPAC modules and check their functionality. Connect the TTC lines and test the signal distribution. Check the SPAC functionality

  5. Equip the ROD crate with the selected ROD and TM modules. Connect the Glink cooling pipes and check the Glink temperatures and the crate power consumption

  6. Test ROD VME access and VME data transfer

  7. Test the crate functionality by injecting input data from Injector module and reading optical output data

The last two tests are not yet completely defined. They will be specified in more detail after gaining experience from the back-end crate system test, the beam tests performed in summer 2004 and the first pre-installation tests in the EMF.

The manpower required for the full ROD installation can be found in the LARG Installation Tasks and WP table [7], section 7. In total the estimate is 87 technical man-days and 116 man-days of engineer or physicist time for all partitions. These numbers do not include the commissioning of the common front-end and back-end parts.




  5. High voltage system

The LAr HV system [8] is housed in sub-racks located on USA15 level 2. Five racks are needed to hold all crates and ancillary equipment; their foreseen layout is sketched in Fig. 5. The sub-racks are air-cooled inside the closed racks, relying on heat extraction by the water-cooling circuit of the rack heat exchangers.

The sub-racks house the HV units (at most 8 each), each unit delivering 32 independent HV channels on one multi-pin REDEL-LEMO connector. Each HV unit is controlled by 2 CAN nodes, each serving 16 channels. Thus a group of four sub-racks completely fills the address space (0…63) of one CAN-bus line. The CAN-bus lines are driven by three PCs equipped with PCI-CAN interfaces, each running two separate CAN lines.
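The CAN address-space statement can be verified by a quick count (all numbers from the text above):

```python
# 4 sub-racks per CAN-bus line, at most 8 HV units per sub-rack,
# 2 CAN nodes per unit, 16 channels per node.
sub_racks, units_per_sub_rack, nodes_per_unit = 4, 8, 2
nodes = sub_racks * units_per_sub_rack * nodes_per_unit
channels = nodes * 16

print(nodes, channels)  # 64 nodes (addresses 0..63), 1024 HV channels per line
```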

The PCs (atlar-hv1,2,3) are 19-inch rack-mounted units housed in the same racks as the HV sub-racks. One additional PC (atlar-hv0) serves as the disk data server for the other three. The configuration will most probably be one client (hv0) communicating with servers running on the hv1,2,3 machines; these serve the data points of the HV modules seen by the two CAN-bus lines.



In addition to the USA15 system, the HV filter boards need to be installed on the cryostat HV feedthroughs. They are mandatory for any HV test on the calorimeters. The first HV test will take place immediately after the transport and lowering of the cryostats.

The following activities are foreseen in EMF:


  • Shipment to CERN. All HV sub-racks and modules are currently at CERN, most of them in use in the integration effort in B180.

  • Storage in EMF. The HV modules and sub-racks will be labeled with ATLAS identifiers and functional labels. They will be declared in the e-production DB and in the ATLAS installation DB.

  • Pre-installation tests. No further tests are foreseen at this point. However, one crate equipped with an absolute calibration system will be available for periodic precision recalibration of the HV modules and for individual module debugging.

The HV system should be installed in one go. Since, however, some sub-racks are needed in B180 for testing the cryostats, a partial system (the sub-racks and modules not in use there) will probably be installed first and tested in USA15 long before any USA15/UX-15 cables are available.

We estimate a total of 8 days for a 2-person team for the physical sub-rack installation and 15 days for the module installation. Other minor points to be covered are the installation of the control PCs and their connection to the network. Basic tests include verification of powering and cooling; CAN communication with the PCs also needs to be verified.

Tests will be done in two steps. First, the HV system as such needs to be operational with its multi-level client-server control software. After the cabling has been set up and routed to the rack front faces, connections are made to the HV feedthroughs on the cryostats. Only then does full commissioning take place, first with (lower) HV values while the cryostat volumes are filled with gas, and later at full HV once in LAr. This process is repeated for each of the three cryostats.




  6. Low voltage power supplies

The low-voltage power supply system consists of 60 independent power units: 58 for FECs and 2 for HEC LVPS. Figure 6 depicts a block diagram of the subsystem powering the 58 Front-End Crates. The same applies to the 2 HEC LV PS boxes.

Eight racks are allocated on USA15 level 2 for the power supply distribution. The layout of the racks will be implemented so that each of them powers only Front-End Crates (or HEC LVPS boxes) from the same calorimeter side, for a total of 4 logical partitions. More precisely:


  • Rack 16-22: 8 280V units for EM Barrel side A

  • Rack 16-23: 8 280V units for EM Barrel side A

  • Rack 16-24: 8 280V units for EM Barrel side C

  • Rack 16-25: 8 280V units for EM Barrel side C

  • Rack 16-26: 7 280V units for EC side A

  • Rack 16-27: 6 280V units for EC side A and 2 units for the HEC LV

  • Rack 16-28: 7 280V units for EC side C

  • Rack 16-29: 6 280V units for EC side C and 2 units for the HEC LV
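A quick sum shows that the unit counts listed above match the 58 point-to-point LV lines quoted earlier (32 barrel + 26 end-cap; the HEC LV units are counted separately):

```python
# 280 V unit counts, as listed per rack above.
barrel_units = 8 + 8 + 8 + 8   # EM Barrel, sides A and C
endcap_units = 7 + 6 + 7 + 6   # EC, sides A and C

total_fec_units = barrel_units + endcap_units
print(barrel_units, endcap_units, total_fec_units)  # 32 26 58
```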

The detailed layout of each rack including the location of the ventilation, fan trays and heat exchanger units has still to be determined and will depend on the final mechanical specifications of the 280V PS modules.

The following activities are foreseen in EMF:



  • The 280V PS units will be delivered directly from the manufacturer to CERN. With each unit the manufacturer will deliver a Production Unit Traveler Sheet proving the conformity of the unit to the specifications

  • Each unit will be labeled and data inserted in the Electronics Production Database

  • Each unit will be stored and tested at CERN before installation. The tests will include:

    • Power-up/down sequence at the nominal settings with determined load

    • Test of the monitoring and control functionalities (voltage and current monitoring, GND fault detection, interlock activation)

    • Test results at EMF will be also included in the Electronics Production Database

The units of all 4 logical partitions should be installed at once. However, the final tests are conditional on the installation of the LV DC-DC converters and of the Front-End Crate boards in UX-15; the tests should therefore be organized by partition, coherently with the testing of the Front-End Crates.

A short description of the installation sequence follows:


  • Rack preparation and mechanical installation of the 280V units: 1 tech for 8 days

  • Routing and installation of the DCS interface: 1 tech for 4 days

  • DCS tests: 1 phys/eng for 4 days

  • Routing and connection of the 280V PS cables (including second connector installation and basic tests): 1 tech for 5 days



  7. LAr detector control system

The LAr Detector Control System (DCS) is described in [9]. It will be a hierarchical system with components located inside the cryostats, in the front-end crates, in the tile-calorimeter finger regions, on the cryogenics platform in UX-15 and in the LAr DCS rack in USA15. These components are integrated into the complete monitoring and control system, which is intended to maintain safe and reliable operation of the LAr calorimeter.

The principal LAr DCS tasks are:


  1. Overall LAr subsystem DCS control, interactions between LAr DCS and DAQ, database writing

  2. Module temperature monitoring

  3. Liquid Argon purity monitoring

  4. FEC low voltage monitoring and water cooling monitoring

  5. 280 Volt power supply monitoring and control (see section 6)

  6. HEC low voltage electronics power supply monitoring and control

  7. High Voltage power supply monitoring and control (see section 5)

The LAr DCS system makes heavy use of the Embedded Local Monitoring Board (ELMB) developed by the ATLAS central DCS team. It is a multi-purpose board with analogue, digital, and CANBus sections. The analogue section has 64 differential ADC channels, used in LAr for monitoring voltages and PT100 temperature-sensitive resistors. The digital section contains a CPU, which controls the functions of the ELMBs, and also contains a number of digital input/output channels. The CANBus section is used to configure, control and readout the ELMB. These three sections are optically isolated, and can be powered independently.
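For illustration, converting an ELMB ADC reading into a PT100 temperature might look as follows. This is a sketch only: the linearised IEC 60751 relation is used, and the sense current, reference voltage and ADC resolution are assumptions, not the actual LAr readout parameters.

```python
R0 = 100.0       # PT100 resistance at 0 degC, ohms
ALPHA = 0.00385  # linearised temperature coefficient (IEC 60751), 1/K

def resistance_from_adc(counts, full_scale=2**16, v_ref=5.0, i_sense=1e-3):
    """Hypothetical chain: sensor voltage at a known sense current,
    digitised by one differential ADC channel of the ELMB."""
    volts = counts / full_scale * v_ref
    return volts / i_sense

def pt100_temperature(r_ohm):
    """Invert R(T) = R0 * (1 + ALPHA * T)."""
    return (r_ohm / R0 - 1.0) / ALPHA

r = resistance_from_adc(1413)        # hypothetical ADC reading
print(round(pt100_temperature(r), 1))
```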


The LAr DCS system uses rack 16-18 in USA15. The layout of this rack is shown in Fig. 7. The EMF will not be used heavily by LAr DCS, and most of the components will already be present at CERN before the start of installation. Most of the LAr DCS subsystems will be developed and tested in the FEC tests at BNL, the detector cold tests in Building 180, or the 2004 combined beam tests.



The LAr DCS system will be installed and tested one subsystem at a time, with each subsystem running independently. In particular, the FEC low-voltage monitoring and control will be operational during the complete LAr electronics installation period. A possible installation and testing sequence would be:

Overall LAr DCS (one person for 6 months initially, plus continuous availability during installation period)

  • Install PVSS workstation in LAr DCS rack in USA15

  • Install CANBranch/ELMB power crates in USA15

  • Test communication with LAr DCS LCSs, when possible

  • Test communication with ATLAS global GCS workstation

  • Implement all data points exchanged with LCSs and GCS

  • Implement communication with LAr DAQ systems when possible

  • Complete implementation of DCS finite-state-machine protocol

  • Define user display and interaction panel in PVSS
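The finite-state-machine protocol mentioned above can be sketched as a small transition table; the states and commands here are illustrative placeholders, not the ATLAS DCS FSM specification.

```python
# Allowed (state, command) -> next-state transitions; any other command
# from a given state is rejected.
TRANSITIONS = {
    ("OFF", "initialise"): "STANDBY",
    ("STANDBY", "start"): "READY",
    ("READY", "stop"): "STANDBY",
    ("STANDBY", "shutdown"): "OFF",
    ("READY", "error"): "ERROR",
}

class Subsystem:
    def __init__(self, name):
        self.name, self.state = name, "OFF"

    def command(self, cmd):
        nxt = TRANSITIONS.get((self.state, cmd))
        if nxt is None:
            raise ValueError(f"{cmd!r} not allowed from state {self.state}")
        self.state = nxt
        return nxt

hv = Subsystem("HV")
print(hv.command("initialise"), hv.command("start"))  # STANDBY READY
```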

LAr module temperature monitoring (1-2 people for 1 month initially, plus one person during each temperature FEB installation, and availability during cool-down)

  • Install PVSS workstation (Temperature LCS) in LAr DCS rack in USA15

  • Install ELMBs in cryostat platform area

  • Install ELMBs in EC FEC region when possible

  • Test communication between ELMBs and Temperature LCS computer

LAr purity monitoring (1-2 people for 1 month initially, plus 1 person during each purity FEB installation)

  • Install PVSS workstation (purity LCS) in LAr DCS rack in USA15

  • Install purity crate and electronics in USA15

  • Test communication between front-end and back-end boards, and with purity LCS

FEC and 280-volt power supply monitoring and control (1-2 people for the entire FEC installation period)

  • Install PVSS workstation (FEC LCS) in LAr DCS rack in USA15

  • Install ELMBs in FEC PS area, including cabling of CANBus lines to USA15

  • Install ELMBs in USA15 for 280-volt PS monitoring and control

  • Test communication between ELMBs and FEC LCS computer

  • Test readout and control lines between ELMBs and power supplies

HEC LV system monitoring and control (1-2 people for 1 month for each EC)

  • Install PVSS workstation (HEC LCS) in LAr DCS rack in USA15

  • Test communication between HEC PS “boxes” and HEC LCS computer



  8. Summary




8.1. Back-end test benches in EMF

The minimal test equipment needed in the EMF for the back-end subsystems is shown in Table 4. In total, three test benches can be located in 4 racks; extra working space is needed for 5 computers. These benches are used for pre-installation tests of the back-end electronics and provide control, readout and monitoring of the corresponding front-end parts.
Table 4: EMF test benches for back-end electronics.

Subsystem   Racks   Crates   PC   Function
L1            1       2       1   Tests of receivers, TBB, TDB and L1 cables
ROD           2       2       2   ROD tests, FEB checks
HV            1       1       1   Tests of HV modules and crates
LV            –       1       0   Tests of LV supplies, FEC
DCS           –       0       1   DCS tests, support all benches in EMF




8.2. Manpower estimation for installation in USA15

Table 5 presents the summary of manpower estimations for operations in USA15. The common work with front-end installation crew is not included here.

Additional time and manpower are needed for tests and preparations in the EMF. It is difficult to make a reliable estimate of this work, since most of the pre-installation tests are not yet specified.


Table 5: Manpower estimation for installation in USA15.

             Technical                 Engineer/physicist
Subsystem    Person   Day   Man-day   Person   Day   Man-day
L1 cables    3        60    180       3        60    180
FE links     –        –     –         –        –     –
HV cables    1-2      39    72        1-2      24    42
LV cables    1        11    11        1        4     4
DCS lines    –        –     –         –        –     –
L1           1        10    10        2        30    60
ROD          1-2      50    87        2-3      50    116
HV           1-2      38    61        2        26    52
LV           1        14    14        1        14    14
DCS          –        –     –         –        –     –

References

[1] LAr FE electronics installation.
[2] The responsibility table, posted in the ATLAS Installation web area:
http://atlas.web.cern.ch/Atlas/TCOORD/Activities/TcOffice/Scheduling/Installation/LAr_inst_resp.html
[3] L1 trigger cables PRR.
[4] W.E. Cleland, B. Liu, J. Rabel and G. Zuk, "Receiver/Monitor System for the ATLAS Liquid Argon Calorimeter", CERN EDMS document ATL-AL-EN-0043 v.3.
[5] LAr ROD system documentation:
http://atlas.web.cern.ch/Atlas/GROUPS/LIQARGON/Electronics/Back_End/index.html
[6] LAr FEB to ROD connections. Document in preparation.
[7] LAr Electronics & Cables Installation Tasks:
http://atlas.web.cern.ch/Atlas/GROUPS/LIQARGSTORE/II/Installation/LAr_Elec_Cab_Inst.pdf
[8] http://atlas.web.cern.ch/Atlas/GROUPS/LIQARGON/Electronics/Power_Cooling/HV_supplies/HV-Channels_ALL.htm
[9] LAr DCS System Overview, 3 Feb 2004:
http://atlas.web.cern.ch/Atlas/GROUPS/LIQARGON/Electronics/Monitoring/



* In this note, "sub-rack" refers to a unit to be inserted into a rack, and "module" to a unit to be inserted into a sub-rack.

** The schedule is still to be defined; it will depend mainly on the delivery of components.
