Saturday, December 7, 2013

Solid Performance, with a Solid Name: Welcome to SolidFire.

I had the chance to meet with SolidFire CEO and founder Dave Wright while at VMworld this year. Who is SolidFire? Well, I will give an overview of that and share an interview I had with Dave shortly.

SolidFire is an all-SSD architecture that scales out and uses RAID-less data protection. But if you ask me what sets them apart in the SSD arena, my answer is QoS.

"Quality of service is not a feature. It is a architecture." They go in to state it is " the only way to deliver guaranteed storage performance in public and private cloud infrastructure"  source
http://solidfire.com/technology/qos-benchmark-architecture/

The ability to set levels of service is critical in the service provider space: Min, Max, and Burst IOPS per volume, for example.

QoS Demo Video



As you can see below, the system can really push out the IO numbers.



As you can see, it is fairly straightforward on the hardware side; however, I would recommend dedicated switching.




Source: http://solidfire.com/technology/solidfire-storage-system/

What else? Add in array integrations with VMware and OpenStack, to name a couple, plus multi-tenancy reporting, but I won't touch on those today.

I'll wrap up with an interview I personally conducted with Dave, CEO and founder of SolidFire.

My Thoughts?

Need a top tier with extreme performance? Need QoS? Need multi-tenancy? There are Solid reasons to consider SolidFire when looking at storage. Now if I could only get a unit in my lab...


Thanks


Roger Lund

System Examples*

Cluster Size            5 Nodes        20 Nodes          40 Nodes          100 Nodes
Effective Capacity**
  SF3010                60TB           240TB             480TB             1.2PB
  SF6010                108TB          432TB             864TB             2.1PB
  SF9010                173TB          692TB             1.4PB             3.4PB
4K Random IOPS
  SF3010/SF6010         250,000        1M                2M                5M
  SF9010                375,000        1.5M              3M                7.5M
kW at max IO Load
  SF3010/SF6010         1.5 kW         6 kW              12 kW             30 kW
  SF9010                2.2 kW         9 kW              18 kW             45 kW
Rack Units              5RU            20RU (half rack)  40RU (full rack)  100RU

Source: http://solidfire.com/technology/solidfire-storage-system/
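
If you like to sanity-check vendor math like I do, the scaling above is easy to verify: each column is simply the 5-node per-node figure multiplied out. Here is a quick back-of-the-envelope sketch (plain Python, nothing SolidFire-specific; the per-node numbers are just derived from the 5-node column above):

  # Back-of-the-envelope check that the "Effective Capacity" row scales
  # linearly with node count. Per-node figures are derived from the
  # 5-node column of the table above (e.g. 60TB / 5 = 12TB per SF3010 node).
  EFFECTIVE_TB_PER_NODE = {
      "SF3010": 60.0 / 5,    # 12.0 TB effective per node
      "SF6010": 108.0 / 5,   # 21.6 TB effective per node
      "SF9010": 173.0 / 5,   # 34.6 TB effective per node
  }

  def effective_capacity_tb(model, nodes):
      """Estimate effective cluster capacity in TB for a given node count."""
      return EFFECTIVE_TB_PER_NODE[model] * nodes

  for nodes in (5, 20, 40, 100):
      row = ", ".join("%s: %.0fTB" % (m, effective_capacity_tb(m, nodes))
                      for m in sorted(EFFECTIVE_TB_PER_NODE))
      print("%3d nodes -> %s" % (nodes, row))

At 100 nodes that works out to roughly 1,200TB, 2,160TB, and 3,460TB, which lines up with the 1.2PB / 2.1PB / 3.4PB the table advertises.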

Fine-grain, per-volume settings

SolidFire's QoS functionality lets cloud providers set and control fine-grain performance levels for every volume and guarantee application performance with firm SLAs (see the sketch after this list):
  • Min IOPS - The minimum number of I/O operations per second that are always available to the volume, ensuring a guaranteed level of performance even in failure conditions.
  • Max IOPS - The maximum number of sustained I/O operations per second that a volume can process over an extended period of time.
  • Burst IOPS - The maximum number of I/O operations per second that a volume will be allowed to process during a spike in demand, particularly effective for data migration, large file transfers, database checkpoints, and other uneven, latency-sensitive workloads.

Source: http://solidfire.com/technology//solidfire-element-os/guaranteed-qos/
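
To make Min/Max/Burst concrete, here is a rough sketch of what pushing those settings to a volume could look like through SolidFire's JSON-RPC management API. The endpoint path, the ModifyVolume method, and the qos field names are my assumptions from reading the Element API documentation, and the address, credentials, and volume ID are pure placeholders, so check your cluster's API guide before relying on any of this.

  # Rough sketch: set per-volume Min/Max/Burst IOPS through SolidFire's
  # Element JSON-RPC API. Method and field names are assumptions; the MVIP
  # address, credentials, and volume ID are placeholders.
  import requests

  MVIP_ENDPOINT = "https://192.0.2.10/json-rpc/7.0"   # placeholder cluster MVIP
  AUTH = ("admin", "changeme")                        # placeholder credentials

  payload = {
      "method": "ModifyVolume",        # assumed method name
      "params": {
          "volumeID": 42,              # placeholder volume
          "qos": {
              "minIOPS": 1000,         # floor honored even under contention or failure
              "maxIOPS": 5000,         # sustained ceiling
              "burstIOPS": 10000,      # short-term spike allowance
          },
      },
      "id": 1,
  }

  # verify=False only because lab clusters typically use a self-signed cert
  resp = requests.post(MVIP_ENDPOINT, json=payload, auth=AUTH, verify=False)
  resp.raise_for_status()
  print(resp.json())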

Friday, October 25, 2013

EMC Elect 2014 Judge

I am proud to announce on vBrainstorm.com and vTech411.com that I was selected to be one of the judges that will choose the 2014 EMC Elect! I was selected as EMC Elect 2013, which was the inaugural year of the program. Some of you may be asking what the EMC Elect is. Head on over to this site to answer that question: https://community.emc.com/community/connect/emc_elect

I was chosen as an EMC Elect due to my community contributions on Twitter, my blogs, and my participation in the EMC Community Network (ECN). We will be contacting all of the nominees soon and will start the judging process thereafter. Those selected as 2014 EMC Elect will be notified in January.

I am excited for this opportunity to give back to the EMC community.  I have never been a judge for anything so it will be a learning process.  It will be tough narrowing the nominations down but anything challenging is always rewarding.  If you have been nominated, I wish you good luck!  I cannot wait to help pick the next round of EMC Elect champions!

Tuesday, September 24, 2013

Clearing the ARP cache table in ESXi 5.5

A new feature has been added in the 5.5 release of ESXi: you can now clear the ARP cache on an ESXi server. This is a very useful feature that should have always been part of ESXi. It ONLY works with ESXi 5.5. The KB article contents are below and were copied from http://kb.vmware.com/kb/2049667

Clearing the ARP cache table in ESXi 5.5 (2049667)

Purpose

This article provides information on the new esxcli command introduced in vSphere 5.5 to clear the ARP table. ESXi 3.x, 4.x, and ESXi 5.0/5.1 do not include any mechanism to clear the ARP table.

For more information, see Troubleshooting network connection issues using Address Resolution Protocol (ARP) (1008184).

Resolution

vSphere 5.5 introduces a new esxcli network ip neighbor remove command to clear the ARP cache table.

To clear the ARP cache table in ESXi 5.5, use this command:
esxcli network ip neighbor remove [options]
Where options include:
  • -i string or --interface-name=string

    Where string is the name of the VMkernel network interface from which the neighbor entry must be removed. If this option is not specified, the neighbor is removed from all interfaces.
  • -a address or --neighbor-addr=address

    Where address is the IPv4/IPv6 address of the neighbor. This is mandatory.
  • -N instance or --netstack=instance

    Where instance is the network stack instance. If unspecified, the default netstack instance is used.
  • -v number or --version=number

    Where number is the IP version and can either be 4 or 6. This is mandatory.

For example, to delete the ARP entry for address 10.131.0.103:

  1. Connect to the ESXi 5.5 host using SSH. For more information, see Using Tech Support Mode in ESXi 4.1 and ESXi 5.x (1017910).
  2. View the current ARP table using this command:

    # esxcli network ip neighbor list

    You see output similar to:

    Neighbor      Mac Address        Vmknic    Expiry  State  Type
    ------------  -----------------  ------   -------  -----  -----
    10.131.0.103  00:1c:c4:a9:6f:fc   vmk0     908 sec        Unknown
    10.131.0.179  00:1e:0b:bf:7a:50   vmk0    1062 sec        Unknown

  3. To delete the ARP entry for address 10.131.0.103, run one of these commands:

    • # esxcli network ip neighbor remove -v 4 -a 10.131.0.103
    • # esxcli network ip neighbor remove --version=4 --neighbor-addr=10.131.0.103
  4. View the ARP table again using this command:

    # esxcli network ip neighbor list

    You see output similar to:

    Neighbor      Mac Address        Vmknic    Expiry  State  Type
    ------------  -----------------  ------   -------  -----  -----
    10.131.0.179  00:1e:0b:bf:7a:50   vmk0    750 sec        Unknown
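
If you have more than a couple of hosts to clean up, the two commands above are easy to wrap in a small script. The sketch below assumes SSH (Tech Support Mode) is enabled on each host and uses the paramiko library; the host names and credentials are placeholders, and the only ESXi commands it runs are the ones documented in the KB article above.

  # Sketch: clear the same stale ARP entry on several ESXi 5.5 hosts over SSH.
  # Assumes SSH (Tech Support Mode) is enabled; hosts and credentials are placeholders.
  import paramiko

  HOSTS = ["esxi01.lab.local", "esxi02.lab.local"]   # placeholder host names
  STALE_IP = "10.131.0.103"                          # entry to remove (KB example address)

  for host in HOSTS:
      ssh = paramiko.SSHClient()
      ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
      ssh.connect(host, username="root", password="VMware1!")  # placeholder credentials
      try:
          # Remove the entry, then list the table to confirm it is gone.
          for cmd in ("esxcli network ip neighbor remove -v 4 -a " + STALE_IP,
                      "esxcli network ip neighbor list"):
              stdin, stdout, stderr = ssh.exec_command(cmd)
              print("[%s] $ %s" % (host, cmd))
              print(stdout.read().decode())
      finally:
          ssh.close()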

Thursday, June 13, 2013

Dell PowerEdge VRTX: Dell's Introduction into the Future of Compute?

I had the pleasure of attending Dell Enterprise Forum this year. If you have not noticed, Dell is growing and increasing its server market. "In the past three years, Dell has managed to grow its servers and networking sales from $7.6 billion to $9.3 billion." Source: http://www.insidermonkey.com/blog/dell-inc-dell-cisco-systems-inc-csco-are-growing-in-the-declining-global-server-market-168461/

Further stepping into the compute market, Dell announced the  PowerEdge VRTX shared infrastructure platform.



"The PowerEdge VRTX is a shared infrastructure platform offering extensive performance and capacity with office-level acoustics in a single, compact tower chassis. It is an ideal solution for small and midsize businesses as well as remote and branch offices of large enterprises. The simple, efficient and versatile platform can be rapidly deployed to consolidate and manage business applications in two or three virtualized servers with shared storage and integrated networking."



Source: http://www.dell.com/us/business/p/poweredge-vrtx/pd

Is Dell making a move into the unknown? Let's take a look.



How does it work? I will give you a high-level overview and point you in the right direction for a deep dive. Think of it as a small-scale, remote-office blade center.

Tech Specs:

"
Feature
PowerEdge VRTX Technical Specifications
Chassis enclosure
Form factors:
Tower or 5U rack enclosure
Tower configuration:
48.4cm (19.1in) H with system feet x 31.0cm (12.2in) W with system feet opened x 73.0cm (28.7in) D
Weight (empty) = 31.7kg (69.7lb)
Weight (maximum) = 74.8kg (164.9lb)
Rack configuration:
21.9cm (8.6in) H x 48.2cm (19.0in) W x 73.0cm (28.7in) D
Weight (empty) = 24.7kg (54.5lb)
Weight (maximum) = 68.7kg (151.5lb)
Server node options
Dell PowerEdge M620 and M520 servers
Power supplies
Redundant power supply units:
110/220V auto-sensing
Redundant power supplies support 2+2 (AC redundancy), and 3+1, 2+1, and 1+1 (power supply
redundancy) modes
Cooling
VRTX comes standard with 6 hot-pluggable, redundant fan modules and 4 blower modules:
Based on Dell Energy Smart Technologies, VRTX fans and blowers are a breakthrough in power and
cooling efficiency
The fans and blowers deliver low-power consumption, but also use next-genera
tion fan technologies
to ensure the lowest possible amount of fresh air is consumed to cool the enclosure
Input devices
Front control panel with interactive graphical LCD:
Supports initial configuration wizard
Local server blade, enclosure, and module information and troubleshooting
Two USB keyboard/mouse connections and one video connection for local front “crash cart” console
connections
Optional DVD-RW
Raid controller
Shared PERC8
Drive bays and
hard drives
Up to 12 x 3.5in NLSAS, SAS, or SAS SSD hot-plug drives or
Up to 25 x 2.5in NLSAS, SAS, or SAS SSD hot-plug drives
Embedded NIC
1GbE internal switch module (standard) with 16 internal 1GbE ports and 8 external ports
Ethernet pass-through module with 8 external ports (optional)
I/O slots
8 flexible PCIe slots:
3 full-height/full-length slots (150W) with double-wide card support (225W)
5 low-profile/half-length slots (25W)"

As you can see, this is a powerful product. I ask myself: is this the first of many steps for Dell into a new future? Will Dell tackle scale computing? We will have to wait and see. I, for one, am looking forward to the next lineup of Dell PowerEdge server releases.


As promised, I wanted to share a deep-dive post titled: A Detailed Look at Dell PowerEdge VRTX



Roger L