<img width="800" alt="t1-jinr-hall-light.jpg" src="http://lxs-s03.jinr.ru/twiki/pub/Main/t1-jinr-hall-grad-light.jpg" style="vertical-align: top;" title="t1-jinr-hall-grad-light.jpg" />

---++ Networking

<!-- p { margin-bottom: 0.1in; line-height: 120%; } -->

The network infrastructure is one of the most important components of JINR and the MICC, providing access to resources and the ability to work with big data. It is a high-speed, reliable network with a dedicated redundant data link to CERN ( *LHCOPN*). The LHCOPN is composed of multiple 10 Gbps links interconnecting the Tier-0 centre at CERN with the Tier-1 sites, and it fulfils its mission of providing stable, high-capacity connectivity.

<img src="%ATTACHURL%/Net1.jpg" height="360" width="600" align="right" hspace="20" vspace="20" >

The network architecture of the Tier-1 at JINR was built with<strong> a double route</strong> between the access level and the server level. Each server has access to the network segment via *two* equivalent<strong> 10 Gbps</strong> links, giving a total throughput of *20 Gbps*. The connection between the access level and the distribution level consists of *four 40 Gbps* routes, which allows data transmission at 160 Gbps, the oversubscription being 1:3. The Tier-1 network segment is implemented on<strong> Brocade VDX 6740</strong> switches, capable of data communication with more than *230 10-Gigabit Ethernet ports* and *40 Gigabit Ethernet ports*.
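The throughput figures above can be sanity-checked with a short back-of-the-envelope calculation. This is only a sketch: the number of servers per access switch is an assumption chosen to reproduce the stated 1:3 oversubscription, not a figure from the text.

```python
def aggregate_gbps(links: int, speed_gbps: int) -> int:
    """Total throughput of several equivalent links, assuming ideal aggregation."""
    return links * speed_gbps

# Each server: two equivalent 10 Gbps links
server_gbps = aggregate_gbps(2, 10)       # 20 Gbps per server

# Access level -> distribution level: four 40 Gbps routes
uplink_gbps = aggregate_gbps(4, 40)       # 160 Gbps

# Oversubscription = server-facing capacity : uplink capacity.
# With 24 dual-attached servers per access switch (an assumed figure),
# 24 * 20 = 480 Gbps faces the servers against 160 Gbps of uplink.
servers_per_switch = 24
ratio = (servers_per_switch * server_gbps) / uplink_gbps
print(server_gbps, uplink_gbps, ratio)    # 20 160 3.0  -> the stated 1:3
```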
The series comprises models with optical and copper <strong>10 Gbps</strong> ports and *40 Gbps uplinks*.

*2019:* <strong><span style="color: sienna;">Hardware</span></strong>
   * Local Area Network (LAN): *2x10Gbps*, planned upgrade to *100Gbps*
   * Wide Area Network (WAN): *100Gbps*, 2x10Gbps; an upgrade of the WAN to *nx100Gbps* is planned

*2018:* <strong><span style="color: sienna;">Hardware</span></strong>
   * Local Area Network (LAN): *10Gbps*, planned upgrade to *100Gbps*
   * Wide Area Network (WAN): *100Gbps*, 2x10Gbps; an upgrade of the WAN to *2x100Gbps* is planned

*2014:* <strong><span style="color: sienna;">Hardware</span></strong>
   * 10xSBM-GEM-X2C+ in top-of-the-rack 5 Processor blade.
   * 4xProcurve 3500yl-24G at the disk and infrastructure servers.
   * Procurve 5406zl as the backbone.
   * 2x1GbE TRUNK between the machines and SBM-GEM-X2C+ / Procurve 3500yl-24G.
   * 1x10G between SBM-GEM-X2C+/Procurve 3500yl-24G and Procurve 5406zl.
   * 1x10G between Procurve 5406zl and the JINR Border Gateway.
   * 2x10G between the JINR Border Gateway and the IX.

-- Main.TWikiAdminUser - 2014-08-12
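The 2014 link list above implies that the end-to-end capacity from a worker node to the internet exchange (IX) was limited by the first hop. A minimal sketch of that reasoning, assuming ideal link aggregation and no concurrent traffic (the hop names are descriptive, not official):

```python
# Per-hop capacities (Gbps) along the 2014 path, taken from the list above.
hops = [
    ("server -> SBM-GEM-X2C+ (2x1GbE trunk)",        2 * 1),
    ("SBM-GEM-X2C+ -> Procurve 5406zl (1x10G)",      1 * 10),
    ("Procurve 5406zl -> Border Gateway (1x10G)",    1 * 10),
    ("Border Gateway -> IX (2x10G)",                 2 * 10),
]

# The end-to-end capacity is the minimum over all hops.
bottleneck = min(capacity for _, capacity in hops)
print(f"end-to-end bottleneck: {bottleneck} Gbps")  # the 2x1GbE server trunk
```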
Topic revision: r7 - 2019-02-28 - TWikiAdminUser
Copyright © 2008-2024 by the contributing authors. All material on this collaboration platform is the property of the contributing authors.