+55 (11) 3022-4694 contato@dcparts.com.br

Support and maintenance of DELL EMC storage

 

DC Parts offers a proactive, professional approach, and above all we understand what it means to add continuous value every time we speak with customers. We have deep knowledge and experience with a specific focus on EMC Clariion, Data Domain, VNX, Celerra, Centera and Isilon systems. This focus, combined with our extensive knowledge base, ensures that we can keep your DELL EMC systems running smoothly beyond the manufacturer's end-of-service-life (EOSL) date.

Third-party maintenance of DELL EMC storage

 

By providing organizations with a viable, sustainable way to extend the life of their systems after an 'end of life' or 'end of service life' announcement, we offer an economical solution for mission-critical DELL EMC systems. This significantly reduces your total cost of ownership and generates a greater return on your initial capital expenditure on IT storage hardware.

Our consultants can help you tailor a DELL EMC support and maintenance solution that meets your specific current requirements. Our support service gives you the flexibility to pay only for the technical and hardware resources you need, for as long as you need them. DC Parts recognizes that companies do not want to enter into annual maintenance contracts they cannot justify simply to cover a temporary need, such as a data/platform migration and decommissioning project.

Get in touch with us to see whether we can provide the guarantees you need and to explore an alternative maintenance support service.

Residual asset value for your DELL EMC hardware

 

Our exclusive DELL EMC storage maintenance and support service gives you the potential to build a cost-neutral maintenance contract for DELL EMC storage that is post-warranty, end of life or end of service life (EOL/EOSL). Our service allows customers with DELL EMC hardware to run multiple stages of a lifecycle program while receiving extended maintenance support, whether for 3, 6, 9 or even 12 months, effectively creating a cost-neutral solution.

 

Cost reduction in your contract

 

We can make a maintenance solution cost-neutral through the residual value of your DELL EMC storage hardware, which we would remove at the end of the agreed maintenance support term. Services for secure decommissioning and data erasure, in line with your internal compliance requirements, can also be part of the hardware lifecycle services discussion.

If you don't need a support service, our maintenance service can still benefit you: we can provide a residual-value quote for your IT server and storage hardware and discuss the decommissioning and data-erasure process, allowing you to achieve a much higher return on your initial capital investment. There is no obligation attached to a DC Parts quote, and it lets you explore whether this service is viable and beneficial for you.

Find out more by contacting us.

DC Parts has a friendly team of consultants who can quickly and efficiently understand your needs and offer solutions that suit your short- or long-term requirements. As a fully independent, vendor-agnostic company, we can give you unbiased advice that you will not find elsewhere. Our goal is to provide an economical alternative for your current or end-of-life (EOL) EMC Clariion array. Our technical team has substantial knowledge of these systems, from implementing and installing additional storage capacity, through system configuration and redeployment, to ongoing Clariion maintenance and support.

Our EMC Clariion support also covers short-term maintenance requirements. You may be starting a decommissioning or migration project that needs temporary support; a support contract adapted to your needs, mitigating any associated service risk during the project, is a fundamental and economical approach. Since DC Parts can support Clariion arrays beyond the manufacturer's end of service life (EOSL), redeploying these systems into lower-risk environments, such as research and development, becomes an interesting option, especially when it generates a higher return on investment.

 

Benefits of DC Parts maintenance support for EMC Clariion storage

  • EMC Clariion technical diagnostics and hardware repair
  • Level 3 engineer assigned to an account
  • Global service availability
  • Email alert notification
  • Maintenance support for current and end-of-life EMC equipment
  • Pro-rata support and co-termination dates
  • Consulting
  • Detailed health check reports
  • Site audits / surveys
  • Data / platform migration
  • IT decommissioning
  • Data erasure
  • Highly experienced and specially trained technical field engineers
  • Spare parts kit on site
  • Various methods for raising a fault ticket
  • Helpdesk assistance
  • Operation and configuration assistance
    • Navisphere
    • Unisphere
  • Flexible short and long-term maintenance support contracts
  • Hardware rental
  • DMR (digital media retention)
  • Operational savings

 

End of EMC Clariion support, but not from DC Parts

EOSL ('End of Service Life') Dates

  • CX200 - SEPTEMBER 2009
  • CX300 - JUNE 2013
  • CX400 - SEPTEMBER 2009
  • CX500 - DECEMBER 2011
  • CX600 - SEPTEMBER 2009
  • CX700 - DECEMBER 2011
  • CX3-10C - MARCH 2014
  • CX3-20 / F / C - MARCH 2014
  • CX3-40 / F / C - MARCH 2014
  • CX3-80 - MARCH 2014
  • CX4-120 - DECEMBER 2016
  • CX4-240 - DECEMBER 2016
  • CX4-480 - DECEMBER 2016
  • CX4-960 - DECEMBER 2016
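For teams tracking several arrays against these dates, the list above is easy to encode. A minimal sketch (month-end dates are an assumption, since the list gives only month and year):

```python
from datetime import date

# EOSL dates for selected EMC Clariion arrays, taken from the list above.
# Assumption: the EOSL date is the last day of the stated month.
EOSL = {
    "CX3-80": date(2014, 3, 31),
    "CX4-120": date(2016, 12, 31),
    "CX4-240": date(2016, 12, 31),
    "CX4-480": date(2016, 12, 31),
    "CX4-960": date(2016, 12, 31),
}

def is_past_eosl(model: str, today: date) -> bool:
    """Return True if the array model is past the manufacturer's EOSL date."""
    return today > EOSL[model]

print(is_past_eosl("CX4-480", date(2020, 1, 1)))  # True: EOSL was December 2016
```

The same lookup works for the Celerra, Centera, VNX, Data Domain, Isilon and VMAX date lists below.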

The EMC Celerra array is a now-discontinued NAS device produced by DELL EMC. It was available as an integrated unit or as a NAS head that could be added to a standalone DELL EMC storage array, such as a Clariion. It supported file-level access using the CIFS, NFS and MPFS protocols, plus FTP, NDMP and TFTP. A Celerra unified storage array uses an EMC Clariion storage array as its storage layer, which also provides iSCSI and Fibre Channel block-level storage.

Celerra runs a real-time operating system called Data Access in Real Time (DART). DART is a modified embedded UNIX kernel (only 32 MB) with additional functionality, such as a Fibre Channel HBA driver and Ethernet bonding, added so it can operate as a file server. Celerra is based on the same X-Blade architecture as Clariion and was available with a single X-Blade data mover or with multiple data movers in an active-passive N+1 configuration.

DC Parts has specific technical experience and knowledge of the EMC Celerra array, especially if you have a unified system configuration that combines Celerra and Clariion technologies. If you need ongoing maintenance support on a stand-alone EMC Celerra or a unified system, our consultants and technical staff can help you further. 

 

Benefits of DC Parts maintenance support for EMC Celerra storage

  • EMC Celerra technical diagnostics and hardware repair
  • Level 3 engineer assigned to an account
  • Global service availability
  • Email alert notification
  • Maintenance support for current and end-of-life EMC equipment
  • Pro-rata support and co-termination dates
  • Consulting
  • Detailed health check reports
  • Site audits / surveys
  • Data / platform migration
  • IT decommissioning
  • Data erasure
  • Highly experienced and specially trained technical field engineers
  • Spare parts kit on site
  • Various methods for raising a fault ticket
  • Helpdesk assistance
  • Operation and configuration assistance
    • Celerra Manager
    • DART ('Data Access in Real Time')
    • Navisphere
  • Flexible short and long-term maintenance support contracts
  • Hardware rental
  • DMR (digital media retention)
  • Operational savings

 

End of EMC Celerra support, but not from DC Parts

EOSL ('End of Service Life') Dates

  • NS20 - JULY 2014
  • NS40 - JULY 2014
  • NS80 - JULY 2014
  • NS80G - JULY 2014
  • NS-120 - DECEMBER 2016
  • NS-480 - DECEMBER 2016
  • NS-960 - DECEMBER 2016

The EMC Centera Content Addressable Storage (CAS) platform for data archiving may be a DELL EMC platform you currently use, as it provides organizations with content authenticity, governance and compliance, long-term retention and high availability, with maximum efficiency for data archiving.

Are you looking for a viable option for ongoing maintenance and support of your Centera platform with no service risk? DC Parts is a maintenance and technical support provider that can give you additional time for due diligence on your archiving platform and relieve the pressure from manufacturers looking to sell you the latest and greatest archiving solution at considerable cost!

DC Parts has technical experience with the Centera RAIN architecture (Redundant Array of Independent Nodes) and the CentraStar operating system. Whether you need ongoing maintenance and support for your CAS platform or temporary support to mitigate service risk during a migration project, one of our consultants will be happy to assist you.

 

Benefits of DC Parts maintenance support for EMC Centera storage

  • EMC Centera technical diagnostics and hardware repair
  • Level 3 engineer assigned to an account
  • Global service availability
  • Email alert notification
  • Maintenance support for current and end-of-life EMC equipment
  • Pro-rata support and co-termination dates
  • Consulting
  • Detailed health check reports
  • Site audits / surveys
  • Data / platform migration
  • IT decommissioning
  • Data erasure
  • Highly experienced and specially trained technical field engineers
  • Spare parts kit on site
  • Various methods for raising a fault ticket
  • Helpdesk assistance
  • Operation and configuration assistance
    • Centera
  • Flexible short and long-term maintenance support contracts
  • Hardware rental
  • DMR (digital media retention)
  • Operational savings

 

End of EMC Centera support, but not from DC Parts

EOSL ('End of Service Life') Dates

  • CENTERA GEN 1 - DECEMBER 2009
  • CENTERA GEN 2 - DECEMBER 2009
  • CENTERA GEN 3 - MARCH 2011
  • CENTERA GEN 4-1.2TB - FEBRUARY 2013
  • CENTERA GEN 4-2.0TB - FEBRUARY 2013
  • CENTERA GEN 4-LP 3TB - FEBRUARY 2014
  • CENTERA GEN 4-LP 4TB - 
  • CENTERA GEN 4-LP 8TB - 
  • CENTERA GEN 4-LP 12TB - 

First-generation EMC VNX storage systems have reached the end of the manufacturer's warranty period, and finding a suitable, sustainable maintenance solution can be an important consideration. A considerable investment has been made, whether in a midrange VNX5100, 5300 or 5500 or a high-end VNX5700 or 7500, and it is important to find a practical solution that maximizes the return on that investment. DC Parts can provide a viable, practical and economical way to continue supporting these systems without introducing service risk.

The VNX platform, a unified system combining Clariion SAN and Celerra NAS technology with an updated architecture, was launched by DELL EMC in the first quarter of 2011. However, we still see several organizations using the Clariion or Celerra platforms individually. The VNX operating system is built on the same OS architectures used by the Clariion and Celerra systems, FLARE and DART respectively. The VNX system uses the newer Unisphere management GUI rather than Navisphere, which some Clariion systems still use today.

The entry-level VNXe 3100, 3150 and 3300 systems were an economical way for IT managers with limited funds to try the new EMC technology. Continued manufacturer support on entry-level systems, however, may no longer make financial sense, and finding an alternative support solution for them can make budgetary sense.

 

Benefits of DC Parts maintenance support for EMC VNX storage

  • EMC VNX technical diagnostics and hardware repair
  • Level 3 engineer assigned to an account
  • Global service availability
  • Email alert notification
  • Maintenance support for current and end-of-life EMC equipment
  • Pro-rata support and co-termination dates
  • Consulting
  • Detailed health check reports
  • Site audits / surveys
  • Data / platform migration
  • IT decommissioning
  • Data erasure
  • Highly experienced and specially trained technical field engineers
  • Spare parts kit on site
  • Various methods for raising a fault ticket
  • Helpdesk assistance
  • Operation and configuration assistance
    • Unisphere
    • PowerPath
  • Flexible short and long-term maintenance support contracts
  • Hardware rental
  • DMR (digital media retention)
  • Operational savings

 

End of EMC VNX support, but not from DC Parts

EOSL ('End of Service Life') Dates

  • VNX5100 - DECEMBER 2019
  • VNX5300 - DECEMBER 2019
  • VNX5500 - DECEMBER 2019
  • VNX5700 - DECEMBER 2019
  • VNX7500 - DECEMBER 2019
  • VNXe3100 - MARCH 2018
  • VNXe3150 - DECEMBER 2020
  • VNXe3200 - JANUARY 2023
  • VNX5200 - JANUARY 2023
  • VNX5400 - JANUARY 2023
  • VNX5600 - JANUARY 2023
  • VNX5800 - JANUARY 2023
  • VNX7600 - JANUARY 2023
  • VNX8000 - JANUARY 2023

Over the past 18 months, the warranties on EMC Data Domain deduplication storage systems have begun to reach the end of manufacturer support, and many more models will enter this phase in the coming months. Check whether your model falls into that category. DC Parts is an independent, vendor-agnostic support and maintenance specialist for server and storage systems. Our focus is specifically on these two areas of IT; unlike many other support providers, we do not dilute our attention, and we provide our customers with the technical expertise needed to continue supporting and maintaining essential systems without service-level risk.

A primary area of focus for DC Parts is bringing flexibility to customer requirements. We understand that most organizations have invested in a team with skill sets relevant to their current infrastructure and may require only a specific part of a complete maintenance contract to maintain their systems economically. DC Parts consultants welcome the opportunity to discuss your specific requirements in more detail and to tailor a service that continues to maintain, support and operate these systems so that you achieve a lower total cost of ownership on your initial investment.

 

Benefits of DC Parts maintenance support for EMC Data Domain storage

  • EMC Data Domain technical diagnostics and hardware repair
  • Level 3 engineer assigned to an account
  • Global service availability
  • Email alert notification
  • Maintenance support for current and end-of-life EMC equipment
  • Pro-rata support and co-termination dates
  • Consulting
  • Detailed health check reports
  • Site audits / surveys
  • Data / platform migration
  • IT decommissioning
  • Data erasure
  • Highly experienced and specially trained technical field engineers
  • Spare parts kit on site
  • Various methods for raising a fault ticket
  • Helpdesk assistance
  • Operation and configuration assistance
    • DD OS (Data Domain Operating System)
  • Flexible short and long-term maintenance support contracts
  • Hardware rental
  • DMR (digital media retention)
  • Operational savings

 

End of EMC Data Domain support, but not from DC Parts

EOSL ('End of Service Life') Dates

  • DD410 - NOVEMBER 2010
  • DD430 - NOVEMBER 2010
  • DD460 - NOVEMBER 2010
  • DD560 - JUNE 2011
  • DD580 - DECEMBER 2013
  • DD120 - DECEMBER 2014
  • DD150 - DECEMBER 2014
  • DD530 - DECEMBER 2014
  • DD565 - MARCH 2015
  • DD660 - DECEMBER 2015
  • DD690 - JUNE 2016
  • DD880 - SEPTEMBER 2016
  • DD140 - JUNE 2017
  • DD610 - JUNE 2017
  • DD630 - JUNE 2017
  • DD640 - MARCH 2019
  • DD670 - MARCH 2019
  • DD860 - MARCH 2019

EMC Isilon is a scale-out network-attached storage (NAS) platform for high-volume storage, backup and archiving of unstructured data. A clustered Isilon storage system consists of three or more nodes, each of which is self-contained. OneFS is the operating system that unifies a cluster of nodes into a single shared resource, and it allows the storage system to grow symmetrically or independently, depending on whether additional space or processing capacity is needed.

Different node types are available for the Isilon storage system: the S series provides high IOPS for transaction-intensive applications; the X series provides high throughput; the NL series provides high capacity for archiving and disaster recovery needs; and the HD series provides high density. All four node types can coexist in a single file system within the same cluster. Because Isilon is designed as a scale-out platform, a cluster can grow to 144 nodes, and nodes can be added as needed for additional capacity or performance, enabling relatively high storage utilization rates.
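The scale-out model described above can be sketched in a few lines: OneFS pools every node into one file system, so cluster capacity is simply the sum over nodes, subject to the 3-to-144-node cluster limits. The per-node capacities below are purely illustrative assumptions, not Isilon specifications:

```python
from dataclasses import dataclass

MAX_NODES_PER_CLUSTER = 144  # OneFS scale-out limit cited above
MIN_NODES_PER_CLUSTER = 3    # a cluster starts at three nodes

@dataclass
class Node:
    series: str    # "S", "X", "NL" or "HD"
    raw_tb: float  # hypothetical per-node raw capacity, for illustration only

def cluster_raw_tb(nodes: list[Node]) -> float:
    """OneFS presents all nodes as one shared pool, so raw capacity is the sum."""
    if not MIN_NODES_PER_CLUSTER <= len(nodes) <= MAX_NODES_PER_CLUSTER:
        raise ValueError("an Isilon cluster runs 3 to 144 nodes")
    return sum(n.raw_tb for n in nodes)

# Mixed node types can coexist in the same cluster:
nodes = [Node("S", 20.0)] * 3 + [Node("NL", 108.0)] * 2
print(cluster_raw_tb(nodes))  # 276.0
```

Adding a node to the list grows capacity (or, for S-series nodes, IOPS) without re-architecting the cluster, which is the point of the scale-out design.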

Our consultants at DC Parts have a wealth of knowledge with a specific focus on server and storage systems. If you are looking for a practical and viable alternative for maintenance and support of your current Isilon system, why not contact our team to see whether we can help you further?

 

Benefits of DC Parts maintenance support for EMC Isilon storage

  • EMC Isilon technical diagnostics and hardware repair
  • Level 3 engineer assigned to an account
  • Global service availability
  • Email alert notification
  • Maintenance support for current and end-of-life EMC equipment
  • Pro-rata support and co-termination dates
  • Consulting
  • Detailed health check reports
  • Site audits / surveys
  • Data / platform migration
  • IT decommissioning
  • Data erasure
  • Highly experienced and specially trained technical field engineers
  • Spare parts kit on site
  • Various methods for raising a fault ticket
  • Helpdesk assistance
  • Operation and configuration assistance
    • OneFS
  • Flexible short and long-term maintenance support contracts
  • Hardware rental
  • DMR (digital media retention)
  • Operational savings

 

End of EMC Isilon support, but not from DC Parts

EOSL ('End of Service Life') Dates

  • X200 - MARCH 2015
  • X400 - JUNE 2015
  • NL400 - MARCH 2016

The VMAX 10K, 20K and 40K storage systems offer scalability and high availability while providing transformation functionality for the hybrid cloud. EMC VMAX storage systems use a Virtual Matrix architecture, a unique way of building storage systems that go beyond the physical constraints of existing architectures by scaling system resources through building blocks called VMAX engines.

A single 10K, 20K or 40K engine provides the basis for high availability in its respective storage system. Each VMAX engine contains two directors and redundant interfaces to the EMC Virtual Matrix interconnect. Each director consolidates front-end, global memory and back-end functions, allowing direct memory access to data for optimized I/O operations.

To provide scalable performance and high availability, EMC VMAX engines are interconnected by a set of active fabrics. Additional VMAX engines can be added non-disruptively to provide a linear scale-out of system resources. The Virtual Matrix architecture was designed to accommodate several additional engines, which can be located anywhere in a data center, allowing an unprecedented level of infrastructure scalability with a single point of management.

Our DC Parts consultants have a wealth of knowledge with a specific focus on server and storage systems. If you are looking for a practical and viable alternative for maintenance and support of your current VMAX system, why not contact our team to see whether we can help you further?

 

Benefits of DC Parts maintenance support for EMC VMAX storage

  • EMC VMAX technical diagnostics and hardware repair
  • Level 3 engineer assigned to an account
  • Global service availability
  • Email alert notification
  • Maintenance support for current and end-of-life EMC equipment
  • Pro-rata support and co-termination dates
  • Consulting
  • Detailed health check reports
  • Site audits / surveys
  • Data / platform migration
  • IT decommissioning
  • Data erasure
  • Highly experienced and specially trained technical field engineers
  • Spare parts kit on site
  • Various methods for raising a fault ticket
  • Helpdesk assistance
  • Operation and configuration assistance
    • SymmWin, Enginuity
    • SYMCLI
    • HYPERMAX OS
  • Flexible short and long-term maintenance support contracts
  • Hardware rental
  • DMR (digital media retention)
  • Operational savings

 

End of EMC VMAX support, but not from DC Parts

EOSL ('End of Service Life') Dates

  • VMAX 10K - SEPTEMBER 2017
  • VMAX 20K - MARCH 2020
  • VMAX 40K - SEPTEMBER 2022
  • VMAX 100K - 
  • VMAX 200K - 
  • VMAX 300K - 

Dell EMC Gen 1 VNXe Technical Specifications

Dell EMC's VNXe series provides a consolidated platform for provisioning, managing and monitoring data storage and is designed for small organizations or for use in remote offices. VNXe has a simplified Unisphere management interface and supports integration with VMware and Hyper-V servers, providing storage over an IP network.

  • VNXe3100 (2U): Max drives / max raw capacity: single SP 6-48 drives, dual SP 6-96 drives / 192TB; Processor: 1 or 2 Intel Xeon dual-core; FAST Cache: N/A; Embedded I/O per SP: 2 x 1GbE; Flex I/O module port options: 4 x 1GbE; Supported drives: 300GB 15K, 600GB 15K, 1TB, 2TB; DPE options: 12 x 3.5″ or 25 x 2.5″ drives; Drive enclosure options: 12 x 3.5″ SAS/NL-SAS, 25 x 2.5″ SAS/Flash; Supported protocols: iSCSI, CIFS, NFS
  • VNXe3150 (2U): Max drives / max raw capacity: single SP 6-50 drives, dual SP 6-100 drives / 288TB; Processor: 1 or 2 Intel Xeon quad-core with 4 or 8GB cache; FAST Cache: N/A; Embedded I/O per SP: 2 x 1GbE; Flex I/O module port options: 4 x 1GbE; Supported drives: 300GB 15K, 600GB 15K, 1TB, 2TB; DPE options: 12 x 3.5″ or 25 x 2.5″ drives; Drive enclosure options: 12 x 3.5″ SAS/NL-SAS, 25 x 2.5″ SAS/Flash; Supported protocols: iSCSI, CIFS, NFS
  • VNXe3300 (3U): Max drives / max raw capacity: 6-120 drives / 240TB; Processor: 2 x Intel Xeon quad-core; FAST Cache: N/A; Embedded I/O per SP: 4 x 1GbE; Flex I/O module port options: 4 x 1GbE, 2 x 10GbE; Supported drives: 100GB Flash, 300GB 15K, 600GB 15K, 1TB NL-SAS, 2TB NL-SAS; DPE options: 15 x 3.5″ or 25 x 2.5″ drives; Drive enclosure options: 15 x 3.5″ Flash/SAS/NL-SAS; Supported protocols: iSCSI, CIFS, NFS

 

Dell EMC Gen 2 VNXe Technical Specifications

Dell EMC's VNXe3200 leverages the MCx architecture, improving performance, scalability and functionality, and provides unified storage provisioning in a single enclosure. The VNXe3200 is an ideal platform for physical and virtualized server infrastructures and is well suited to smaller configurations.

  • VNXe3200 (2U): Max drives / max raw capacity: 6-50 drives / 200TB; Processor: 2 x 2.2GHz 4-core Xeon, 24GB memory; FAST Cache: 200GB; Embedded I/O per controller: 4 x 10Gb/s base ports; Flex I/O module port options: 4 x 8Gb/s FC ports; Supported drives: Flash 100GB, 200GB; 15K SAS 300GB, 600GB; 10K SAS 600GB; NL-SAS 2TB, 4TB; DPE options: 12 x 3.5″ or 25 x 2.5″ drives; Drive enclosure options: 12 x 3.5″ SAS/NL-SAS, 25 x 2.5″ SAS/Flash; Supported protocols: CIFS (SMB1, SMB2, SMB3), NFSv3, iSCSI, FC, NLM, RIP v1-v2, SNMP, NDMP v1-v4, ARP, ICMP, SNTP, LDAP

Dell EMC VNX Gen 1 Technical Specifications

The Dell EMC VNX succeeds the EMC Clariion and Celerra platforms, offering scalable hybrid block and file storage solutions. VNX is designed for mid-tier through enterprise platforms, with file-only, block-only and unified offerings. VNX provides higher drive density using SAS drive technology and faster processor speeds, delivering more efficient performance than its Clariion and Celerra predecessors.

  • VNX5100 (3U): Max drives / max raw capacity: 4-75 / 225TB; Processor: 2 x Intel Xeon 5600, 8GB memory; FAST Cache: 100GB; Max ports per array: 8; File component (data movers): N/A; Max 6Gb/s SAS buses: 2 x 4-lane; Supported drives: Flash 100GB, 200GB; SAS 300GB 15K, 600GB 15K, 300GB 10K, 600GB 10K, 900GB 10K; NL-SAS 1TB, 2TB, 3TB; Drive enclosure options: 15 x 3.5″ SAS/Flash/NL-SAS (3U) and 25 x 2.5″ SAS/Flash (2U); Supported protocols: FC
  • VNX5300 (3U): Max drives / max raw capacity: 4-125 / 360TB; Processor: 2 x Intel Xeon 5600, 16GB memory; FAST Cache: 500GB; Max ports per array: 24; Data movers: 1-2; Ports per data mover: FC 4, IP 8, 1GBASE-T 8, 10GbE 4; Max 6Gb/s SAS buses: 2 x 4-lane; Supported drives: Flash 100GB, 200GB; SAS 300GB 15K, 600GB 15K, 300GB 10K, 600GB 10K, 900GB 10K; NL-SAS 1TB, 2TB, 3TB; Drive enclosure options: 15 x 3.5″ SAS/Flash/NL-SAS (3U) and 25 x 2.5″ SAS/Flash (2U); Supported protocols: CIFS, NFS, pNFS, MPFS, FC, FCoE, iSCSI
  • VNX5500 (3U): Max drives / max raw capacity: 4-250 / 720TB; Processor: 2 x Intel Xeon 5600, 24GB memory; FAST Cache: 1TB; Max ports per array: 24; Data movers: 1-3; Ports per data mover: FC 4, IP 12, 1GBASE-T 12, 10GbE 6; Max 6Gb/s SAS buses: 2 or 6 x 4-lane; Supported drives: Flash 100GB, 200GB; SAS 300GB 15K, 600GB 15K, 300GB 10K, 600GB 10K, 900GB 10K; NL-SAS 1TB, 2TB, 3TB; Drive enclosure options: 15 x 3.5″ SAS/Flash/NL-SAS (3U) and 25 x 2.5″ SAS/Flash (2U); Supported protocols: CIFS, NFS, pNFS, MPFS, FC, FCoE, iSCSI
  • VNX5700 (2U): Max drives / max raw capacity: 4-500 / 1,485TB; Processor: 2 x Intel Xeon 5600, 36GB memory; FAST Cache: 1.5TB; Max ports per array: 24; Data movers: 2-4; Ports per data mover: FC 4, IP 12, 1GBASE-T 12, 10GbE 6; Max 6Gb/s SAS buses: 4 x 4-lane; Supported drives: Flash 100GB, 200GB; SAS 300GB 15K, 600GB 15K, 300GB 10K, 600GB 10K, 900GB 10K; NL-SAS 1TB, 2TB, 3TB; Drive enclosure options: 15 x 3.5″ SAS/Flash/NL-SAS (3U) and 25 x 2.5″ SAS/Flash (2U); Supported protocols: CIFS, NFS, pNFS, MPFS, FC, FCoE, iSCSI
  • VNX7500 (2U): Max drives / max raw capacity: 4-1,000 / 2,970TB; Processor: 2 x Intel Xeon 5600, 48 or 96GB memory; FAST Cache: 2.1TB; Max ports per array: 32; Data movers: 2-8; Ports per data mover: FC 4, IP 16, 1GBASE-T 16, 10GbE 8; Max 6Gb/s SAS buses: 4 or 8 x 4-lane; Supported drives: Flash 100GB, 200GB; SAS 300GB 15K, 600GB 15K, 300GB 10K, 600GB 10K, 900GB 10K; NL-SAS 1TB, 2TB, 3TB; Drive enclosure options: 15 x 3.5″ SAS/Flash/NL-SAS (3U) and 25 x 2.5″ SAS/Flash (2U); Supported protocols: CIFS, NFS, pNFS, MPFS, FC, FCoE, iSCSI

Dell EMC VNX2 Technical Specifications

The second-generation VNX series offers streamlined, efficient and cost-effective storage solutions with FC, FCoE, 1GbE and 10GbE connectivity, with models ranging from lower mid-tier to enterprise. VNX2 enhancements include improvements to the Unisphere management software framework, next-generation PCIe I/O modules, a larger selection of SAS drives and high-density DAEs, all of which provide significant performance advances.

  • VNX5200: Max drives / max capacity: 125 / 500TB; Processor: 2 x Intel Xeon E5-2600 4-core 1.2GHz, 32GB memory; FAST Cache: 600GB; Max ports per array / data movers / ports per data mover: 2843833; Port breakdown per data mover: FC 4, IP 8, 1GBASE-T 8, 10GbE 4; Max 6Gb/s SAS buses: 2 x 4-lane; Supported drives: Flash 100GB, 200GB, 400GB, 800GB, 1.6TB, 3.2TB; FAST Cache 100GB, 200GB, 400GB; 10K SAS 600GB, 900GB, 1.2TB; 15K SAS 300GB, 600GB; NL-SAS 1TB, 2TB, 3TB, 4TB; Drive enclosure options: 15 x 3.5″ SAS/Flash/NL-SAS (3U); 25 x 2.5″ SAS/Flash (2U); Supported protocols: iSCSI; FCP; FCoE; NFS v2, v3, v4, v4.1 with pNFS; CIFS (SMB1, 2 and 3); FTP
  • VNX5400: Max drives / max capacity: 250 / 1,000TB; Processor: 2 x Intel Xeon E5-2600 4-core 1.8GHz, 32GB memory; FAST Cache: 1TB; Max ports per array / data movers / ports per data mover: 3643834; Port breakdown per data mover: FC 4, IP 8, 1GBASE-T 8, 10GbE 4; Max 6Gb/s SAS buses: 2 x 4-lane; Supported drives: Flash 100GB, 200GB, 400GB, 800GB, 1.6TB, 3.2TB; FAST Cache 100GB, 200GB, 400GB; 10K SAS 600GB, 900GB, 1.2TB; 15K SAS 300GB, 600GB; NL-SAS 1TB, 2TB, 3TB, 4TB; Drive enclosure options: 15 x 3.5″ SAS/Flash/NL-SAS (3U); 25 x 2.5″ SAS/Flash (2U); 60 x 3.5″ (4U) and 120 x 2.5″ (3U) Flash/SAS/NL-SAS; Supported protocols: iSCSI; FCP; FCoE; NFS v2, v3, v4, v4.1 with pNFS; CIFS (SMB1, 2 and 3); FTP
  • VNX5600: Max drives / max capacity: 500 / 2,000TB; Processor: 2 x Intel Xeon E5-2600 4-core 2.4GHz, 48GB memory; FAST Cache: 2TB; Max ports per array / data movers / ports per data mover: 4443834; Port breakdown per data mover: FC 4, IP 8, 1GBASE-T 8, 10GbE 4; Max 6Gb/s SAS buses: 6 x 4-lane or 2 x 4-lane + 2 x 8-lane; Supported drives: Flash 100GB, 200GB, 400GB, 800GB, 1.6TB, 3.2TB; FAST Cache 100GB, 200GB, 400GB; 10K SAS 600GB, 900GB, 1.2TB; 15K SAS 300GB, 600GB; NL-SAS 1TB, 2TB, 3TB, 4TB; Drive enclosure options: 15 x 3.5″ SAS/Flash/NL-SAS (3U); 25 x 2.5″ SAS/Flash (2U); 60 x 3.5″ (4U) and 120 x 2.5″ (3U) Flash/SAS/NL-SAS; Supported protocols: iSCSI; FCP; FCoE; NFS v2, v3, v4, v4.1 with pNFS; CIFS (SMB1, 2 and 3); FTP
  • VNX5800: Max drives / max capacity: 750 / 3,000TB; Processor: 2 x Intel Xeon E5-2600 6-core 2.0GHz, 64GB memory; FAST Cache: 3TB; Max ports per array / data movers / ports per data mover: 4443867; Port breakdown per data mover: FC 4, IP 12, 1GBASE-T 12, 10GbE 6; Max 6Gb/s SAS buses: 6 x 4-lane or 2 x 4-lane + 2 x 8-lane; Supported drives: Flash 100GB, 200GB, 400GB, 800GB, 1.6TB, 3.2TB; FAST Cache 100GB, 200GB, 400GB; 10K SAS 600GB, 900GB, 1.2TB; 15K SAS 300GB, 600GB; NL-SAS 1TB, 2TB, 3TB, 4TB; Drive enclosure options: 15 x 3.5″ SAS/Flash/NL-SAS (3U); 25 x 2.5″ SAS/Flash (2U); 60 x 3.5″ (4U) and 120 x 2.5″ (3U) Flash/SAS/NL-SAS; Supported protocols: iSCSI; FCP; FCoE; NFS v2, v3, v4, v4.1 with pNFS; CIFS (SMB1, 2 and 3); FTP
  • VNX7600: Max drives / max capacity: 1,000 / 4,000TB; Processor: 2 x Intel Xeon E5-2600 8-core 2.2GHz, 128GB memory; FAST Cache: 4.2TB; Max ports per array / data movers / ports per data mover: 4443869; Port breakdown per data mover: FC 4, IP 12, 1GBASE-T 12, 10GbE 6; Max 6Gb/s SAS buses: 6 x 4-lane or 2 x 4-lane + 2 x 8-lane; Supported drives: Flash 100GB, 200GB, 400GB, 800GB, 1.6TB, 3.2TB; FAST Cache 100GB, 200GB, 400GB; 10K SAS 600GB, 900GB, 1.2TB; 15K SAS 300GB, 600GB; NL-SAS 1TB, 2TB, 3TB, 4TB; Drive enclosure options: 15 x 3.5″ SAS/Flash/NL-SAS (3U); 25 x 2.5″ SAS/Flash (2U); 60 x 3.5″ (4U) and 120 x 2.5″ (3U) Flash/SAS/NL-SAS; Supported protocols: iSCSI; FCP; FCoE; NFS v2, v3, v4, v4.1 with pNFS; CIFS (SMB1, 2 and 3); FTP
  • VNX8000: Max drives / max capacity: 1,500 / 6,000TB; Processor: 2 x Intel Xeon E5-2600 8-core 2.7GHz, 256GB memory; FAST Cache: 4.8TB; Max ports per array / data movers / ports per data mover: 8843869; Port breakdown per data mover: FC 4, IP 16, 1GBASE-T 16, 10GbE 8; Max 6Gb/s SAS buses: 16 x 4-lane or 8 x 8-lane; Supported drives: Flash 100GB, 200GB, 400GB, 800GB, 1.6TB, 3.2TB; FAST Cache 100GB, 200GB, 400GB; 10K SAS 600GB, 900GB, 1.2TB; 15K SAS 300GB, 600GB; NL-SAS 1TB, 2TB, 3TB, 4TB; Drive enclosure options: 15 x 3.5″ SAS/Flash/NL-SAS (3U); 25 x 2.5″ SAS/Flash (2U); 60 x 3.5″ (4U) and 120 x 2.5″ (3U) Flash/SAS/NL-SAS; Supported protocols: iSCSI; FCP; FCoE; NFS v2, v3, v4, v4.1 with pNFS; CIFS (SMB1, 2 and 3); FTP
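As a sanity check on the VNX2 figures above, each model's maximum raw capacity follows directly from its maximum drive count multiplied by the largest supported drive (4TB NL-SAS). A quick sketch:

```python
# Maximum drive counts per VNX2 model, from the specifications above.
max_drives = {
    "VNX5200": 125, "VNX5400": 250, "VNX5600": 500,
    "VNX5800": 750, "VNX7600": 1000, "VNX8000": 1500,
}
LARGEST_DRIVE_TB = 4  # 4TB NL-SAS, per the supported-drives list

# Max raw capacity = drive count x largest drive size.
max_raw_tb = {model: n * LARGEST_DRIVE_TB for model, n in max_drives.items()}
print(max_raw_tb["VNX8000"])  # 6000, matching the 6,000TB figure above
```

The same arithmetic reproduces the stated maximum for every model in the range, from 500TB on the VNX5200 to 6,000TB on the VNX8000.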

 

Dell EMC VMAX Technical Specifications

The Dell EMC VMAX series offers a Tier 1 multi-controller architecture for business-critical enterprise environments and also offers FICON connection alternatives for IBM Z connectivity. The VMAX 10K, 20K and 40K series scale from 4 to 8 engines, support up to 3,200 drives and are built on the EMC Virtual Matrix architecture.

Dell EMC's VMAX3 Tier 1 storage systems are designed for mission-critical environments that require high-availability, high-capacity storage. VMAX3 enhancements include a second management control module, hybrid storage options, new all-flash and block configurations, and file support via embedded NAS.

  • VMAX 10K: Engines: up to 4, each with 2 x 6-core 2.8GHz Xeon processors; Cache: 128GB per engine, 512GB max; Max drives: 1,560; Supported drives: 2.5″ SAS 300GB 10K, 300GB 15K, 600GB 10K, 900GB 10K, 1TB 7.2K, 100GB/200GB/400GB SSD; 3.5″ SAS 300GB 10K, 600GB 10K, 900GB 10K, 2TB/3TB/4TB 7.2K, 100GB/200GB/400GB SSD; FC 300GB 15K, 450GB 15K, 600GB 15K; Host protocols: FC, iSCSI, FCoE; Host ports: 8-64 FC; 4-32 16Gb FC; 4-32 GbE; 4-32 10GbE; 4-32 10Gb FCoE
  • VMAX 20K: Engines: up to 8, each with 4 x 4-core 2.33GHz Xeon processors; Cache: 128GB per engine, 1,024GB max; Max drives: 3,200; Supported drives: 3.5″ FC 146GB 15K, 300GB 10K, 300GB 15K, 450GB 10K, 450GB 15K, 600GB 10K, 600GB 15K; SATA 1TB, 2TB, 3TB; 2.5″ SAS 300GB 10K, 450GB 10K, 600GB 10K; SAS SSD 100GB, 200GB, 400GB; Host protocols: FC, FICON, FCoE, iSCSI; Host ports: 4-128 FC; 4-64 4 or 8Gb FICON; 4-64 FCoE; 1GbE and 10GbE iSCSI
  • VMAX 40K: Engines: up to 8, each with 4 x 6-core 2.8GHz Xeon processors; Cache: 256GB per engine, 2,048GB max; Max drives: 2,400-3,200; Supported drives: 3.5″ FC 300GB 10K, 300GB 15K, 600GB 10K, 600GB 15K; SATA 2TB; 2.5″ SAS 300GB 10K, 600GB 10K; 3.5″ SAS SSD 100GB, 200GB, 400GB; 2.5″ SAS SSD 200GB, 400GB; Host protocols: FC, FICON, FCoE, iSCSI; Host ports: 4-128 FC; 4-64 4 or 8Gb FICON; 4-64 FCoE; 1GbE and 10GbE iSCSI
  • VMAX 100K: Engines: 1-2, each with 24 cores (6-core Xeon E5-2620 v2, 2.1GHz); Cache: 512GB or 1,024GB options per engine, 2TB max; Max drives: 1,440; Supported drives: 3.5″ SAS 600GB 10K, 300GB 15K, 4TB 7.2K, 800GB and 1.6TB Flash; 2.5″ SAS 600GB 10K, 1.2TB 10K, 300GB 15K, 800GB/1.6TB/960GB/1.92TB SSD Flash; Host protocols: FC, FICON, FCoE, iSCSI; Host ports: 4 x 8Gb or 16Gb FC (SRDF); 4 x 16Gb FICON; 4 x 10GbE FCoE; 4 x 10Gb iSCSI; for SRDF: GbE 2/2 optical/copper; 10GbE: 2 x 10GbE
  • VMAX 200K: Engines: 1-4, each with 32 cores (8-core Xeon E5-2650 v2, 2.6GHz); Cache: 512GB, 1,024GB or 2,048GB options per engine, 8TB max; Max drives: 2,880; Supported drives: 3.5″ SAS 600GB 10K, 300GB 15K, 4TB 7.2K, 800GB and 1.6TB Flash; 2.5″ SAS 600GB 10K, 1.2TB 10K, 300GB 15K, 800GB/1.6TB/960GB/1.92TB SSD Flash; Host protocols: FC, FICON, FCoE, iSCSI; Host ports: 4 x 8Gb or 16Gb FC (SRDF); 4 x 16Gb FICON; 4 x 10GbE FCoE; 4 x 10Gb iSCSI; for SRDF: GbE 2/2 optical/copper; 10GbE: 2 x 10GbE
  • VMAX 400K: Engines: 1-8, each with 48 cores (12-core Xeon E5-2697 v2, 2.7GHz); Cache: 512GB, 1,024GB or 2,048GB options per engine, 16TB max; Max drives: 5,760; Supported drives: 3.5″ SAS 600GB 10K, 300GB 15K, 4TB 7.2K, 800GB and 1.6TB Flash; 2.5″ SAS 600GB 10K, 1.2TB 10K, 300GB 15K, 800GB/1.6TB/960GB/1.92TB SSD Flash; Host protocols: FC, FICON, FCoE, iSCSI; Host ports: 4 x 8Gb or 16Gb FC (SRDF); 4 x 16Gb FICON; 4 x 10GbE FCoE; 4 x 10Gb iSCSI; for SRDF: GbE 2/2 optical/copper; 10GbE: 2 x 10GbE

The DELL EMC VMAX3 family offers the latest scalable Tier-1 multi-controller architecture, with unmatched consolidation and efficiency for the enterprise. With completely redesigned hardware and software, the VMAX 100K, 200K and 400K arrays deliver unprecedented performance and scale. Ranging from a single-engine VMAX 100K to an eight-engine VMAX 400K, these arrays deliver dramatic increases in floor density, consolidating high-capacity disk enclosures for 2.5″ and 3.5″ drives and engines in the same system bay. VMAX 100K, 200K and 400K arrays can be configured as hybrid or all-flash systems. In addition, the innovative VMAX3 hypervisor enables the family to provide unified block and file support through Embedded NAS (eNAS), eliminating the need for corresponding physical hardware. Embedded Management is also available, eliminating the need to allocate and manage an external server to run Unisphere for VMAX. Data-at-rest encryption is available on all VMAX3 models for applications that require the highest level of security on a Tier 1 converged platform. FAST.X™ extends the VMAX3 data services with SLO-based data provisioning for external arrays, including XtremIO, CloudArray and other third-party systems. VMAX3 now offers an even wider range of options for the data center with the addition of FICON support for mainframe customers, along with support for the Fibre Channel, iSCSI and FCoE front-end protocols. In addition, the VMAX3 has received VASA Provider certification from VMware to support VVol storage.
This revolutionary VMAX3 architecture delivers Virtual Matrix bandwidth of 175GB/sec per engine and up to 1,400GB/sec on an eight-engine VMAX3 array. All VMAX3 models come fully pre-configured from the factory to significantly reduce time to first I/O.

Specifications

UNMATCHED ARCHITECTURE

The Dynamic Virtual Matrix Architecture enables IT departments to build storage systems that transcend the physical constraints of competing array architectures. This architecture allows for scaling of system resources through common, fully redundant building blocks called VMAX3 engines. VMAX3 engines provide the complete foundation for high availability storage arrays. Each engine contains two VMAX directors and redundant interfaces for the Dynamic Virtual Matrix InfiniBand® dual fabric interconnect. Each director consolidates front-end, global memory, and back-end functions, allowing direct in-memory access to data for optimized I/O operations. Depending on the array chosen, up to eight (8) VMAX3 engines can be interconnected through a set of active fabrics that provide scalable performance and high availability. The revolutionary VMAX3 Hypervisor provides the framework for currently supported and future embedded applications.
VMAX3 arrays support native 6Gb/s SAS drives in 2.5″ and 3.5″ form factors, or a combination of both drive types in the same array. Individual system bays can house one or two engines and up to six high-density disk array enclosures (DAEs) per engine, available in 3.5″ (60-slot) or 2.5″ (120-slot) form factors. As a result, each system bay can support up to 720 2.5″ drives, up to 360 3.5″ drives, or a mix of the two. In addition, all the new arrays support system bay dispersion of up to 25 meters from the first system bay. All family members also support third-party storage. Detailed specifications and a comparison of the three VMAX3 arrays follow.

SPECIFICATION SHEET
VMAX3 FAMILY SPECIFICATIONS

COMPONENTS (values listed as VMAX 100K / VMAX 200K / VMAX 400K; "—" = not listed)

ENGINE
Number of Engines supported: 1 to 2 / 1 to 4 / 1 to 8
Engine Enclosure: 4U (all models)
CPU: Intel Xeon E5-2620 v2 2.1GHz 6-core / Intel Xeon E5-2650 v2 2.6GHz 8-core / Intel Xeon E5-2697 v2 2.7GHz 12-core
Dynamic Virtual Matrix BW: 700GB/s / 700GB/s / 1,400GB/s
Cores per CPU / per Engine / per System: 6/24/48 / 8/32/128 / 12/48/384
Dynamic Virtual Matrix Interconnect: InfiniBand dual redundant fabric, 56Gb/s per port (all models)

CACHE
Cache, System Min (raw): 512GB (all models)
Cache, System Max (raw): 2TB (with 1,024GB engines) / 8TB (with 2,048GB engines) / 16TB (with 2,048GB engines)
Cache per Engine options: 512GB, 1,024GB / 512GB, 1,024GB, 2,048GB / 512GB, 1,024GB, 2,048GB
XtremCache Support: Yes / Yes / —
VAULT
Vault Strategy: Vault to Flash (all models)
Vault Implementation: 2 to 4 Flash SLICs/engine / 2 to 8 Flash SLICs/engine / 2 to 8 Flash SLICs/engine

FRONT-END I/O MODULES
Maximum Front-End I/O Modules/engine: 8 (all models)
Front-End I/O Modules and Protocols Supported (all models): FC: 4 x 8Gb/s (FC, SRDF); FC: 4 x 16Gb/s (FC, SRDF); FICON: 4 x 16Gb/s (FICON); FCoE: 4 x 10GbE (FCoE); iSCSI: 4 x 10GbE (iSCSI); GbE: 2/2 Opt/Cu (SRDF); 10GbE: 2 x 10GbE (SRDF)
eNAS I/O MODULES
Max eNAS I/O Modules/Software Data Mover: 2 / 3 / 3 (minimum of 1 Ethernet I/O module required)
eNAS I/O Modules Supported (all models): GbE: 4 x 1GbE Cu; 10GbE: 2 x 10GbE Cu; 10GbE: 2 x 10GbE Opt; FC: 4 x 8Gb/s (NDMP backup; max 1 FC NDMP module/Software Data Mover)

eNAS SOFTWARE DATA MOVERS
Max Software Data Movers: 2 (1 active + 1 standby) / 4 (3 active + 1 standby) / 8 (7 active + 1 standby)
Max NAS Capacity/Array (TB usable): 256 / 1,536 / 3,584
CAPACITY, DRIVES
Max Capacity per Array: 500TBu / 2.3PBu / 4.4PBu
Max Drives per System: 1,440 / 2,880 / 5,760
Max Drives per System Bay: 720 (all models)
Min Spares per System: 1 (all models)
Min Drive Count (1 engine): 4 + 1 spare (all models)
DRIVES
3.5" SAS drives (all models):
  10K RPM: 600GB, 1.2TB [1]
  15K RPM: 300GB [1]
  7.2K RPM: 2TB [1], 4TB [1]
  Flash: 200GB, 800GB
2.5" SAS drives:
  10K RPM: 300GB [1], 600GB [2], 1.2TB [2] / 1.2TB [2] / 600GB [2], 1.2TB [2]
  15K RPM: 300GB [1] (all models)
  Flash: 200GB, 400GB, 800GB, 1.6TB [1][2] / 200GB, 400GB, 800GB, 1.6TB [1][3] / 200GB, 800GB
  Flash: 960GB, 1.92TB [2][3] (all models)
BE Interface: 6Gb/s SAS (all models)
RAID Options (all models): RAID 1 (all drives); RAID 5 (3+1) and RAID 5 (7+1) (all drives); RAID 6 (6+2) and RAID 6 (14+2) (all drives)
[2] Capacity points and drive formats available for upgrades
SYSTEM CONFIGURATION TYPES
All 2.5" DAE Configurations: 2 bays, 1,440 drives / 4 bays, 2,880 drives / 8 bays, 5,760 drives
All 3.5" DAE Configurations: 2 bays, 720 drives / 4 bays, 1,440 drives / 8 bays, 2,880 drives
Mixed Configurations: 2 bays, 1,320 drives / 4 bays, 2,640 drives / 8 bays, 5,280 drives
DISK ARRAY ENCLOSURES
120 x 2.5" drive DAE: Yes (all models)
60 x 3.5" drive DAE: Yes (all models)

CABINET CONFIGURATIONS
Standard 19" bays: Yes (all models)
Single Bay System Configuration: Yes (all models)
Dual Engine System Bay Configuration: Yes (all models)
Third Party Rack Mount Option: Yes (all models)
DISPERSION
System Bay Dispersion: Up to 82 feet (25m) between System Bay 1 and System Bay 2 / Up to 82 feet (25m) between System Bay 1 and any other System Bay / Up to 82 feet (25m) between System Bay 1 and any other System Bay
PRE-CONFIGURATION
100% Virtually Provisioned: Yes (all models)
Preconfigured in the Factory: Yes (all models)
HOST SUPPORT
Open Systems: Yes (all models)
Mainframe (CKD 3380 and 3390 emulation): Yes (all models)
IBM i Series Support (D910 only): Yes / Yes / —
HARDWARE COMPRESSION SUPPORT OPTION (SRDF)
GbE/10GbE: Yes (all models)
8Gb/s FC: Yes (all models)
16Gb/s FC: Yes / Yes / —
POWER OPTIONS
Power: Single- or three-phase, Delta or Wye (all models)
[1] Capacity points and drive formats available on new systems and upgrades
[3] Mixing of 200GB, 400GB, 800GB, or 1.6TB Flash capacities with 960GB or 1.92TB Flash capacities on the same array is not currently supported.
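As a rough illustration of the RAID options listed in the specification table, the fraction of a RAID group's raw capacity left for data follows from its data+parity layout. This is a generic sketch, not an EMC sizing tool; real usable capacity also depends on spares, vault space and formatting overhead.

```python
# Generic sketch: data-to-raw ratio for the RAID group layouts listed
# above (RAID 1, RAID 5 (3+1)/(7+1), RAID 6 (6+2)/(14+2)).
def usable_fraction(data_drives: int, parity_drives: int) -> float:
    """Fraction of a RAID group's raw capacity available for data."""
    return data_drives / (data_drives + parity_drives)

print(usable_fraction(3, 1))   # RAID 5 (3+1)  -> 0.75
print(usable_fraction(7, 1))   # RAID 5 (7+1)  -> 0.875
print(usable_fraction(14, 2))  # RAID 6 (14+2) -> 0.875
```

The wider layouts — RAID 5 (7+1) and RAID 6 (14+2) — trade longer rebuild exposure for a higher usable fraction of raw capacity.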

VMAX3 FAMILY CONNECTIVITY

I/O PROTOCOLS (VMAX 100K / VMAX 200K / VMAX 400K)

8Gb/s FC Host/SRDF Ports
maximum/engine: 32 / 32 / 32
maximum/array: 64 / 128 / 256

16Gb/s FC Host/SRDF Ports
maximum/engine: 32 / 32 / 32
maximum/array: 64 / 128 / 256

16Gb/s FICON Ports
maximum/engine: 32 / 32 / 32
maximum/array: 64 / 128 / 256

10GbE iSCSI Ports
maximum/engine: 32 / 32 / 32
maximum/array: 64 / 128 / 256

10GbE FCoE Ports
maximum/engine: 32 / 32 / 32
maximum/array: 64 / 128 / 256

10GbE SRDF Ports
maximum/engine: 16 / 16 / 16
maximum/array: 32 / 64 / 128

GbE SRDF Ports
maximum/engine: 32 / 32 / 32
maximum/array: 64 / 128 / 256

EMBEDDED NAS PORTS

GbE Ports
maximum/Software Data Mover: 8 / 12 / 12
maximum/array: 16 / 48 / 96

10GbE (Cu or Optical) Ports
maximum/Software Data Mover: 4 / 6 / 6
maximum/array: 8 / 24 / 48

8Gb/s FC NDMP Backup Ports
maximum/Software Data Mover: 1 / 1 / 1
maximum/array: 2 / 4 / 8
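The per-array maxima in the connectivity table follow from the per-engine maxima multiplied by each model's maximum engine count (2, 4 and 8 for the 100K, 200K and 400K). A small sketch of that arithmetic, with hypothetical helper names:

```python
# Sketch: per-array port maximum = per-engine maximum x max engine count.
# MAX_ENGINES reflects the engine counts in the family spec table.
MAX_ENGINES = {"VMAX 100K": 2, "VMAX 200K": 4, "VMAX 400K": 8}

def max_ports_per_array(ports_per_engine: int, model: str) -> int:
    """Upper bound on ports of one type across a fully built array."""
    return ports_per_engine * MAX_ENGINES[model]

print(max_ports_per_array(32, "VMAX 400K"))  # 8Gb/s FC ports -> 256
print(max_ports_per_array(16, "VMAX 100K"))  # 10GbE SRDF ports -> 32
```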

SYSTEM BAY DISPERSION

System Bay Dispersion allows customers to separate any individual or contiguous group of system bays by up to 82 feet (25 meters) from System Bay 1. This provides unmatched data center flexibility to resolve floor-load restrictions or bypass obstacles that would otherwise prevent contiguous configurations.

DISK SUPPORT

The VMAX 100K, 200K, and 400K are compatible with the latest 6Gb/s dual-port native SAS drives. All drive families (Enterprise Flash, 10K, 15K, and 7.2K RPM) support two independent I/O channels with automatic failover and fault isolation. Consult your EMC sales representative for the latest list of supported drives and types. Configurations with mixed drive capacities and speeds are allowed depending on the configuration. All capacities are based on 1GB = 1,000,000,000 bytes. Actual usable capacity may vary depending on configuration.
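Because capacities are quoted with 1GB = 1,000,000,000 bytes, a drive's decimal-rated size shrinks when expressed in binary units. The sketch below is plain unit arithmetic, not an EMC formula; the formatted capacities in the tables that follow also reflect formatting overhead, not just unit conversion.

```python
# Sketch: convert a decimal-rated capacity (1GB = 10**9 bytes) to
# binary GiB (2**30 bytes). Plain unit arithmetic, not an EMC formula.
def decimal_gb_to_gib(gb: float) -> float:
    return gb * 10**9 / 2**30

print(round(decimal_gb_to_gib(300), 1))  # a "300GB" drive is ~279.4 GiB
```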

PLATFORM SUPPORT: VMAX 100K, 200K, 400K
Drive — average seek time read/write (ms); raw capacity (GB); open systems formatted capacity (GB); mainframe formatted capacity (GB)
200GB Flash — N/A; 200; 196.9; 191.2
400GB Flash — N/A; 400; 393.8; 382.3
800GB Flash — N/A; 800; 787.6; 764.7
960GB Flash — N/A; 960; 939.4; 939.3
1.6TB Flash — N/A; 1,600; 1,578.8; 1,549.7
1.92TB Flash — N/A; 1,920; 1,880.1; 1,879.7
300GB 15K RPM — 2.8/3.3; 292.6; 288.1; 279.8
300GB 10K RPM — 3.7/4.2; 292.6; 288.1; 279.8
600GB 10K RPM — 3.7/4.2; 585.4; 576.3; 559.5
1.2TB 10K RPM — 3.7/4.2; 1,200.2; 1,181.7; 1,147.2
2TB 7.2K RPM — 8.2/9.2; 1,912.1; 1,882.7; 1,827.7
4TB 7.2K RPM — 8.2/9.2; 4,000; 3,939.2; 3,824.0

ENERGY CONSUMPTION AND HEAT DISSIPATION AT AMBIENT ENTRY TEMPERATURES

Values are listed as maximum total power consumption (kVA) and maximum heat dissipation (Btu/hr) for the VMAX 100K / VMAX 200K / VMAX 400K. Power dissipation at temperatures above 35°C will be higher, based on adaptive cooling.

System Bay 1, Single Engine: 10.8 kVA, 35,731 Btu/hr / 10.9 kVA, 36,398 Btu/hr / 11.1 kVA, 36,936 Btu/hr
System Bay 2, Single Engine [1]: 10.4 kVA, 34,595 Btu/hr / 10.6 kVA, 35,262 Btu/hr / 10.7 kVA, 35,65 Btu/hr
System Bay 1, Dual Engine: 8.8 kVA, 28,715 Btu/hr / —, 30,048 Btu/hr / 9.4 kVA, 30,975 Btu/hr
System Bay 2, Dual Engine [1]: N/A, N/A / 8.8 kVA, 28,912 Btu/hr / 9.0 kVA, 29,688 Btu/hr

PHYSICAL DIMENSIONS (height x width x depth; maximum weight)
System Bay, Single Engine: 75 in/190 cm x 24 in/61 cm x 47 in/119 cm; 2,065 lbs/937 kg
System Bay, Dual Engine: 75 in/190 cm x 24 in/61 cm x 47 in/119 cm; 1,860 lbs/844 kg

Dell EMC XtremIO 4.0 Technical Specifications

The Dell EMC XtremIO All-Flash Array provides a solution for customers who need scalable, performance-oriented flash storage. Each X-Brick added to a clustered configuration increases system performance by 150,000 IOPS. Clusters are managed by a standalone Linux-based management server, which can manage multiple clusters, and hosts connect over the FC and iSCSI protocols. XtremIO is designed for random I/O with consistent performance across time, data patterns and system conditions.

Model (4.0, Gen 1) — Raw Capacity; Capacity; Usable Capacity; SSDs; N-Way Active Controllers; SSD Enclosures; InfiniBand Switches; IOPS (70/30 r/w); Host Ports
Starter X-Brick — 5.2TB; 21.5TB; 3.6TB; 13-25; 2; 1; 0; 150,000; 4 x 8Gb FC, 4 x 10GbE
1 X-Brick (10TB) — 10TB; 50TB; 8.33TB; 25; 2; 1; 0; 150,000; 4 x 8Gb FC, 4 x 10GbE
1 X-Brick (20TB) — 20TB; 100.2TB; 16.7TB; 25; 2; 1; 0; 150,000; 4 x 8Gb FC, 4 x 10GbE
1 X-Brick (40TB) — 40TB; 201.6TB; 33.6TB; 25; 2; 1; 0; 150,000; 4 x 8Gb FC, 4 x 10GbE
2 X-Brick (10TB) — 20TB; 100TB; 16.7TB; 50; 4; 2; 2; 300,000; 8 x 8Gb FC, 8 x 10GbE
2 X-Brick (20TB) — 40TB; 200.4TB; 33.6TB; 50; 4; 2; 2; 300,000; 8 x 8Gb FC, 8 x 10GbE
2 X-Brick (40TB) — 80TB; 403.1TB; 67.3TB; 50; 4; 2; 2; 300,000; 8 x 8Gb FC, 8 x 10GbE
4 X-Brick (10TB) — 40TB; 200TB; 33.3TB; 100; 8; 4; 2; 600,000; 16 x 8Gb FC, 16 x 10GbE
4 X-Brick (20TB) — 80TB; 400.8TB; 66.7TB; 100; 8; 4; 2; 600,000; 16 x 8Gb FC, 16 x 10GbE
4 X-Brick (40TB) — 160TB; 806.2TB; 134.4TB; 100; 8; 4; 2; 600,000; 16 x 8Gb FC, 16 x 10GbE
6 X-Brick (20TB) — 120TB; 600TB; 100TB; 150; 12; 6; 2; 900,000; 24 x 8Gb FC, 24 x 10GbE
6 X-Brick (40TB) — 240TB; 1,209TB; 201.5TB; 150; 12; 6; 2; 900,000; 24 x 8Gb FC, 24 x 10GbE
8 X-Brick (20TB) — 160TB; 800TB; 133.3TB; 200; 16; 8; 2; 1,200,000; 32 x 8Gb FC, 32 x 10GbE
8 X-Brick (40TB) — 320TB; 1,612TB; 268.7TB; 200; 16; 8; 2; 1,200,000; 32 x 8Gb FC, 32 x 10GbE
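The linear scale-out behind the table can be summarized in a few lines. The function below is a hypothetical sketch using the per-brick figure stated above (150,000 IOPS at a 70/30 read/write mix per X-Brick), not an XtremIO sizing tool.

```python
# Hypothetical sketch of XtremIO 4.0 linear scale-out, using the
# per-brick figures from the table above.
IOPS_PER_BRICK = 150_000   # 70/30 read/write mix, per X-Brick

def cluster_iops(bricks: int) -> int:
    """Aggregate 70/30 IOPS for a supported cluster size."""
    if bricks not in (1, 2, 4, 6, 8):
        raise ValueError("XtremIO 4.0 clusters use 1, 2, 4, 6 or 8 X-Bricks")
    return bricks * IOPS_PER_BRICK

print(cluster_iops(4))  # -> 600000, matching the 4 X-Brick rows
```

Host ports scale the same way: each X-Brick contributes 4 x 8Gb FC and 4 x 10GbE ports.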

 

Dell Isilon Technical Specifications

The Dell EMC Isilon series comprises modular scale-out NAS systems built on a node/cluster architecture and the OneFS operating system, supporting most major networking protocols. The range spans the all-flash F series, offering up to 250,000 IOPS for performance-critical workloads; the S series, offering high performance and massive scalability; the NL series, for managing large amounts of unstructured data; and the HD and X series, which combine high-capacity drives with SSDs.

Model — Drives; Qty Drives Supported; Capacity per Chassis; Nodes/Chassis and Max Chassis per Cluster; Form Factor; Networking
F800 — 1.6TB, 3.2TB or 15.4TB SSD; 15/node, 60/chassis; 96TB, 192TB or 924TB; 4 nodes, 1 to 36 chassis; 4U; 2 x 10GbE (SFP+ or twinax copper) or 2 x 40GbE (QSFP+)
A200 — 2TB, 4TB or 8TB SATA, cache/node: 1 or 2 x 400GB SSD; 15/node, 60/chassis; 120-480TB; 4 nodes, 1 to 36 chassis; 4U; 2 x 10GbE (SFP+)
A2000 — 10TB SATA; 20/node, 80/chassis; 800TB; 4 nodes, 1 to 36 chassis; 4U; 2 x 10GbE (SFP+)
HD400 — 6TB or 8TB SATA (59 SATA + 1 SSD); 60/node, 60/chassis; 354TB/472TB; 1 node, 3 to 144 nodes per cluster; 4U; 10Gbps, 1Gbps and 100Mbps network connectivity
H400 — 2TB, 4TB or 8TB SATA, cache/node: 1 or 2 x 800GB, 1.6TB or 3.2TB SSD; 15/node, 60/chassis; 120TB, 240TB or 480TB; 4 nodes, 1 to 36 chassis; 4U; 2 x 10GbE (SFP+)
H500 — 2TB, 4TB or 8TB; 15/node, 60/chassis; 120TB, 240TB or 480TB; 4 nodes, 1 to 36 chassis; 4U; 2 x 10GbE (SFP+ or twinax copper) or 2 x 40GbE (QSFP+)
H600 — 600GB or 1.2TB 10K SAS, cache/node: 1 or 2 x 1.6TB or 3.2TB SSD; 15/node, 60/chassis; 72TB or 144TB; 4 nodes; 4U; 2 x 10GbE (SFP+ or twinax copper) or 2 x 40GbE (QSFP+)
NL410 — 1TB, 2TB, 3TB, 4TB, 6TB or 8TB, cache/node: 1 SSD (200GB, 400GB, 800GB or 1.6TB); 35/node, 35/chassis; 105TB-40.3PB; 1 node, 1 to 144 nodes per cluster; 4U; 2 x 1GbE and 2 x 10GbE (SFP+ or twinax copper)
NL400 — 1TB, 2TB, 3TB, 4TB or 8TB; 35/node, 35/chassis; 105TB-40.3PB; 1 node, 1 to 144 nodes per cluster; 4U; 2 x 1GbE and 2 x 10GbE (SFP+ or twinax copper)
X210 — 1TB, 2TB or 3TB SATA, plus up to 6 SSDs (200GB, 400GB or 800GB); 12/node; 18TB to 6.9PB; 1 node, 3 to 144 nodes per cluster; 2U; 4 x GbE (twinax copper), or 4 x GbE, or 2 x GbE + 2 x 10GbE (SFP+ or twinax copper)
X410 — 1TB, 2TB, 3TB or 4TB, plus up to 6 SSDs (200GB, 400GB or 800GB); 36/node; 108TB to 20.7PB; 1 node, 3 to 144 nodes per cluster; 4U; 2 x 1GbE and 2 x 10GbE (SFP+ or twinax copper), or 2 x 1GbE and 2 x 40GbE (QSFP+) (requires OneFS 8.0.0.1 or higher)
X200 — 1TB, 2TB or 3TB, plus up to 6 SSDs (200GB, 400GB or 800GB); 12/node; 82GB-6.9PB; 1 node, 3 to 144 nodes per cluster; 2U; 2 x 1GbE and 2 x 10GbE (SFP+ or twinax copper)
X400 — 1TB, 2TB or 3TB, plus up to 6 SSDs (200GB, 400GB or 800GB); 36/node; 97.2TB-20.7PB; 1 node, 3 to 144 nodes per cluster; 4U; 4 x GbE, or 2 x GbE + 2 x 10GbE (SFP+ or twinax copper)
S210 — 300GB, 600GB, 900GB or 1.2TB 10K, plus 0-6 SSDs (400GB, 800GB or 1.6TB); 24/node; 16.2TB to 4.15PB; 1 node, 3 to 144 nodes per cluster; 2U; 2 x 1000Base-T (GbE) copper and 2 x 10GbE (SFP+ or twinax copper), or 2 x 1GbE and 2 x 40GbE (QSFP+) (requires OneFS 8.0.0.1 or higher)
S200 — 300GB, 600GB, 900GB or 1.2TB 10K, plus 0-6 SSDs (200GB, 400GB or 800GB); 24/node; 16.2TB to 4.15PB; 1 node, 3 to 144 nodes per cluster; 2U; 4 x 1000Base-T (GbE) copper, or 4 x GbE (copper), or 2 x GbE and 2 x 10GbE (SFP+ or twinax copper)
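For the 4-node chassis models above, per-chassis raw capacity is simply drives per node x nodes per chassis x drive size. A quick sketch (generic arithmetic with a hypothetical helper name) reproduces the F800 and A2000 rows:

```python
# Generic sketch: raw capacity of a multi-node Isilon chassis as
# drives/node x nodes/chassis x drive capacity (TB).
def chassis_capacity_tb(drives_per_node: int, nodes: int, drive_tb: float) -> float:
    return drives_per_node * nodes * drive_tb

print(chassis_capacity_tb(15, 4, 1.6))  # F800 with 1.6TB SSDs -> 96TB
print(chassis_capacity_tb(20, 4, 10))   # A2000 with 10TB SATA -> 800TB
```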


Third party maintenance for Dell EMC End-of-Life (EOL) and Dell EMC End-of-Service-Life (EOSL)

 

DC Parts has analyzed more than 250 customer contracts covering storage, servers, networking and tape, and found that 40-60% of these customers' equipment operates beyond its end-of-support date.

There are simple options for reducing your expenses while feeling more confident about your IT infrastructure. When you choose a custom solution to extend the life of your IT equipment, you can achieve significant savings.

 

How can we help:

 

We offer our customers a unique assessment that analyzes their OEM support contracts and identifies exactly which devices can be supported in the most cost-effective way, without compromising performance.

 

Dell EMC storage support

 

DC Parts provides support services for Dell EMC storage. We work to combine the right approach with the right equipment to meet your Dell EMC storage support needs.

With a highly trained team of certified engineers ready to solve any Dell EMC storage problem, DC Parts is uniquely qualified to serve you. If you would like to speak with a representative about purchasing Dell EMC storage media, contact us.

DC Parts helps companies of all sizes better manage their IT data centers by providing third-party support services for equipment from leading manufacturers, including IBM, HPE, NetApp, Oracle/Sun, Cisco and more.

With DC Parts support, we manage all your hardware across manufacturers through a single point of contact. DC Parts support offers flexible service-level agreements, certified engineering support teams and dedicated customer service representatives.

Extend the life and value of your IT assets. Contact DC Parts Support immediately.

Request a quote and get answers to any questions you may have.






    Contact DC Parts and we will help you better optimize your IT investment

    Questions?

    Call our product team at +55 (11) 3022-4694 or send an email to: contato@dcparts.com.br

    Warranty:

    All DC Parts hardware is fully tested and guaranteed, and comes with a 90-day warranty. Above all, we will support you before, during and after the sale.

     

    Product Experience:

    We know how important it is that you get the product that meets your needs. Our sales team and agile logistics ensure the shortest possible delivery times.

    We send proposals within 2 business days, work with data centers and integrators, and always try to obtain the best price.

    Do not hesitate to contact us with any questions about compatibility, condition, or the best product available for your needs.

    Our website represents only a fraction of the parts we work with, so contact us with your complete requirements or equipment lists.
