JIVE e-VLBI REPORT
EVN TOG MEETING, September 2008, Bologna
12 September 2008
Arpad Szomoru (extracted from correlator report - rmc)

Since November 2007 a number of successful e-VLBI science runs have taken place (although fewer than we would have liked). Many technical developments were tested and several high-profile demos took place. The April 8 run featured a record-breaking uninterrupted subjob of more than 12 hours at 512 Mbps, demonstrating the greatly improved stability and reliability of the correlator system.

The JIVE-developed Mark5 control code (with the option to interactively drop packets to fit the data stream to the available bandwidth) was further improved and has become the de facto operational control code for e-VLBI operations. Use of the UDP protocol (modified to take account of missing and/or out-of-order packets) made reliable 512 Mbps transfers possible. Packet dropping has pushed data rates to near 1 Gbps, at least to the stations with upgraded Mark5 motherboard/CPU combinations. A further modification to the packet dropping algorithm, ensuring that only packets containing data are dropped while headers are left intact, greatly improved the behavior of the correlator during high-data-rate e-VLBI. All this, combined with the hardware upgrades of the Mark5 units at Medicina and Onsala, has enabled stable data transfers at 920-980 Mbps from all European e-EVN stations.

Simultaneously, the possibility of dropping data from a specific set of individual (VEX) channels, rather than across all channels, was developed. The algorithm has been implemented and, although CPU-intensive, was shown to work on local machines. Tests with formatter data, however, revealed that the Mark5s at the stations must run SMP-enabled Linux kernels to make full use of the available CPU power. A real test of the channel-dropping algorithm has not yet been done, as this kernel upgrade has not yet been carried out at many stations.
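The header-preserving packet dropping described above can be sketched as follows. This is a minimal illustration, not the actual Mark5 control code; the frame layout, header length and the deterministic keep/drop accumulator are all assumptions made for the example:

```python
def drop_to_fit(frames, keep_fraction, header_len=64):
    """Reduce the data rate of a stream of fixed-layout frames by
    dropping the payload of some frames while always transmitting
    their headers, so the correlator never loses frame sync.

    frames        -- list of byte strings (header + payload)
    keep_fraction -- fraction of payloads to keep (available/required rate)
    header_len    -- hypothetical header size in bytes
    """
    out = []
    budget = 0.0
    for frame in frames:
        header, payload = frame[:header_len], frame[header_len:]
        budget += keep_fraction
        if budget >= 1.0:
            budget -= 1.0
            out.append(header + payload)   # payload fits the link: keep it
        else:
            out.append(header)             # drop the data, keep the header
    return out
```

For example, with keep_fraction=0.5 every second frame is sent header-only, roughly halving the payload rate while each frame's header still reaches the correlator.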
Related to this, changes were made to the correlator control software to allow different configurations at different stations, providing an additional tool to adjust data rates. A first test of dynamic scheduling was done on August 28: a schedule was changed at JIVE during an observation, the new schedule file was merged with the old one, distributed to the stations, DRUDG'ed locally (via ssh from JIVE) and run at the stations. The changes were made at Tr and Wb, with Jb staying on the original schedule, and as planned fringes between Tr and Wb reappeared after the change. No new software had to be installed at the stations; the commands were executed from scripts run at JIVE, using ssh in single-command mode.

On July 22 a special test was done involving the MERLIN telescopes at Cambridge, Darnhall and Jodrell Bank (Mk2). In the current MERLIN network the 'out-stations' are connected to Jodrell Bank by microwave links with a throughput of about 128 Mbps. Paul Burgess from JBO connected the links from both Darnhall and Cambridge to the VLBA terminal. The VLBA terminal has four IF inputs, so each IF received data for one polarization from either Darnhall or Cambridge. The sampled IF data from both telescopes were then run through the formatter and Mark5 and transmitted to JIVE. At JIVE, the 'port monitoring' functionality of the central JIVE switch/router was used to 'snoop' on all the network traffic towards one Mark5 and send duplicates to a second Mark5. With this setup we were able to achieve fringes between all three stations at the same time. The experiment was repeated on September 9, this time using IP multicast to perform the packet duplication without having to undertake major networking changes at JIVE. This resulted in the first real-time fringes to the Knockin station of MERLIN. This technique has the potential to significantly improve the sensitivity of the e-EVN to larger-scale structures.
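The single-command ssh mechanism used in the dynamic-scheduling test can be illustrated as follows. This is only a sketch: the host names, the schedule path and the DRUDG invocation are hypothetical, not the actual JIVE scripts:

```python
import subprocess

# Hypothetical station field-system hosts; the real hosts differ.
STATIONS = {"Tr": "oper@fs.tr.example", "Wb": "oper@fs.wb.example"}


def drudg_command(host, schedule_file):
    """Build an ssh invocation in single-command mode: the remote
    command is passed as an argument, so no interactive session and
    no new software is needed at the station."""
    remote = f"cd /usr2/sched && drudg {schedule_file}"
    return ["ssh", host, remote]


def distribute_and_drudg(schedule_file):
    """Run the DRUDG step at every station from a script at JIVE."""
    for code, host in STATIONS.items():
        cmd = drudg_command(host, schedule_file)
        # subprocess.run(cmd, check=True)  # disabled in this sketch
        print(code, " ".join(cmd))
```

Because ssh in single-command mode returns the remote exit status, a driving script at JIVE can detect per-station failures before starting the observation.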
Transfers at a full 1024 Mbps are now possible between Westerbork and JIVE. A single data stream is divided in round-robin fashion over two independent 1-Gbps lightpaths and recombined at the receiving end. Tests have shown that transfers of 1500 Mbps are easily sustained in this way. Using the new 10-Gbps connectivity between SURFnet and Onsala, which will, among other things, be used for Onsala - eMERLIN transfers at 4 Gbps, we were able to transfer data from On to JIVE at 1500 Mbps. Production e-VLBI at 1024 Mbps, however, will only become possible once 10G equipment is installed in the On Mark5. We soon hope to connect Ef, Tr and Jb at 10 Gbps as well.

Arecibo made its reappearance in e-VLBI on 5 February, with data transfers of up to 100 Mbps (with packet dropping). Later in the year a lightpath between Arecibo and JIVE was put in place, allowing 512 Mbps transfers, at least during pre-agreed time slots.

Effelsberg joined the e-EVN in February 2008, with a successful transfer of formatter data at 256 Mbps. First fringes followed on 1 April, both at 512 Mbps and at near-1 Gbps. During these tests a 1 Gbps connection from Bonn into GEANT2 was used; later we switched to a dedicated lightpath via a 10 Gbps connection to Groningen (shared with e-LOFAR).

In an unexpected development, Hartebeesthoek became the next station to join the e-EVN. A 1 Gbps connection between Hh and the nascent South African NREN, SANReN, in Johannesburg, and from there a 64 Mbps link via London to JIVE, became available in May 2008. Hh then participated in two very successful demos: one rather ad hoc, organized for the visit of a high-ranking EC delegation to the Hh telescope site, and the other the high-profile TERENA 2008 demo in Bruges, Belgium. This last demo produced fringes between TIGO, Hh, Ar, Ef, Wb, Mc and On, effectively a four-continent correlation.
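The round-robin division of a single data stream over two lightpaths, as used for the Westerbork - JIVE connection, can be sketched as follows. This is a minimal illustration under assumed conventions (explicit per-packet sequence numbers, in-memory lists standing in for the two links), not the actual transfer software:

```python
def split_round_robin(packets, n_paths=2):
    """Tag each packet with a sequence number and deal the packets
    out round-robin over n_paths independent links (here: lists)."""
    paths = [[] for _ in range(n_paths)]
    for seq, payload in enumerate(packets):
        paths[seq % n_paths].append((seq, payload))
    return paths


def recombine(paths):
    """At the receiving end, merge the per-path streams back into
    sequence order to recover the original single stream."""
    merged = [pkt for path in paths for pkt in path]
    return [payload for _, payload in sorted(merged)]
```

With two 1-Gbps paths each link carries roughly half the aggregate rate, which is how a 1024-Mbps (or, in tests, 1500-Mbps) stream fits over two 1-Gbps lightpaths; the sequence numbers let the receiver restore ordering even if the paths have different latencies.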
The cost of the connection from Johannesburg to London, however, remains prohibitive, and the link had to be discontinued at the end of May. This situation is expected to improve fairly soon, and hopefully Hh will be able to participate at full bandwidth as early as 2009.