Categories
Storage Virtualization

Innovative Virtual Cache Solution Using SSDs to Increase Virtual Machine Density

VMworld US 2012: Proximal Data, the leading provider of server-side caching solutions for virtualized environments, today announced a technology collaboration with Micron Technology Inc. (Nasdaq: MU), one of the world’s leading providers of advanced enterprise flash storage. Proximal Data’s AutoCache™ fast virtual cache software, when combined with Micron’s P400e SATA and P320h PCIe solid-state drives (SSDs), eliminates I/O bottlenecks in virtualized servers and increases virtual machine (VM) density up to three times.

“With our combined solutions, Proximal Data and Micron are changing the face of the virtualized enterprise data center and delivering a unique new solution for maximizing virtual machine density and performance,” said Ed Doller, vice president and general manager of Micron’s enterprise SSD Division. “Customers using Proximal Data’s AutoCache software with Micron’s P400e SATA or P320h PCIe solutions will dramatically improve their ability to support more business applications in virtualized environments while realizing the benefits of greater storage performance, endurance and data reliability.”

“By supporting Micron’s leading enterprise solid-state storage, we are providing enterprises with higher levels of efficiency in how their virtual machines process storage I/O. As a result, they can get more out of their virtualized servers and lower their total cost of ownership,” said Rory Bolt, CEO of Proximal Data. “With AutoCache, we offer a next-generation solution in server virtualization optimization, focused on improving virtual machine density and performance without increasing latency or impacting IT operations.”

Proximal Data’s recently announced AutoCache software works seamlessly within a hypervisor such as VMware ESXi to increase VM density up to three times with absolutely no impact on IT operations. AutoCache also does not require agents in guest operating systems. AutoCache places hot I/O into the Micron flash device to intelligently supply priority data traffic to all VMs. By removing the I/O bottleneck, VM density can be increased and efficiency is improved while minimizing system resource usage. This frees up CPU capacity to support more business applications. AutoCache is also integrated within VMware management environments (vCenter, vSphere) and does not impact processes such as those conducted by vMotion.
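The core idea of placing hot I/O into flash can be pictured as a read cache in front of a slower backing datastore. The sketch below is purely illustrative — an LRU cache in Python — and is not Proximal Data's actual caching algorithm; the class and parameter names are invented for the example:

```python
from collections import OrderedDict

class FlashReadCache:
    """Illustrative LRU read cache: hot blocks are served from fast
    flash; misses fall through to the slower backing datastore."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.cache = OrderedDict()  # block_id -> data held "in flash"
        self.hits = 0
        self.misses = 0

    def read(self, block_id, backing_store):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)  # mark block as recently used
            self.hits += 1
            return self.cache[block_id]
        # Cache miss: fetch from the slow datastore and promote to flash.
        self.misses += 1
        data = backing_store[block_id]
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the coldest block
        return data

# Example: repeated reads of a hot block are served from the cache.
datastore = {b: f"data-{b}" for b in range(10)}
cache = FlashReadCache(capacity_blocks=4)
for _ in range(3):
    cache.read(0, datastore)  # block 0 becomes hot
print(cache.hits, cache.misses)  # 2 hits, 1 miss
```

The point of the sketch is the access pattern, not the eviction policy: once a block is hot, subsequent reads never touch the backing store, which is what relieves the shared-storage I/O bottleneck.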

Micron’s P400e SATA SSD provides reliable high performance for read-intensive applications and serves as a primary drive in applications that need advanced endurance and data reliability. Micron’s P320h is a low-latency, high-IOPS PCIe SSD providing the highest read throughput in the industry and the ideal solution for optimizing applications with heavy read access. Both drives feature high lifetime endurance, enterprise-level data protection and outstanding power efficiency.

Proximal Data will be exhibiting at VMworld 2012 (Booth #522), Aug. 26-30 in San Francisco, where it will be raffling off a Micron P400e drive.

Micron will also be exhibiting at VMworld 2012 (Booth #2430).

Categories
Server Virtualization

NextIO Updates vNET Solution for Streamlined Server I/O Management

NextIO, the pioneer in I/O consolidation and virtualization solutions, today announced significant enhancements to its vNET I/O Maestro solution with the release of nControl Version 2.2. The nControl management software, a standard feature of vNET I/O Maestro, includes new functionality to simplify the assignment of resources to virtual machines, along with support for VMware vSphere 5, three additional operating systems, and new hardware. The vNET I/O Maestro – which can achieve up to 40% CapEx and 60% OpEx savings versus conventional datacenter networking approaches – is the only top-of-rack I/O fabric solution that integrates seamlessly with existing and future servers, networks, and storage fabrics without requiring proprietary drivers or changes to existing governance processes.

NextIO’s nControl management software, initially launched in October 2011 on the vNET I/O Maestro, provides remote centralized management of complex server I/O. nControl provides multiple interfaces to control NextIO appliances, including a secure web interface, command line interface, API, and SNMP traps. With nControl, the I/O resources of the vNET I/O Maestro (10Gb Ethernet and 8Gb Fibre Channel) can be assigned to connected servers with drag and drop simplicity.

“We decided to use vNET because it would reduce our cost per virtual machine and get new customers up quicker than we could achieve with traditional datacenter networking technologies, improving our competitiveness in the Managed Services Provider market,” said Brian Form, managing director for Blue Chip. “The addition of support for VMware vSphere 5 ensures that we will be able to realize these benefits on all of the X86 servers across our datacenters. This allows Blue Chip to continue to provide the best technologies and highest levels of service to our customers.”

“Our organization has to provide consumers with more ways to access information about the quality of healthcare in the United Kingdom, while keeping budgets in check,” stated Tim Palmer, head of IT operations at Dr. Foster Intelligence. “We looked to NextIO as a way to reduce costs in our next-generation server clusters. The latest version of nControl on vNET allows us to achieve these goals.”

New features of nControl 2.2 include support for VMware vSphere 5, Citrix XenServer, the Linux KVM hypervisor, and Microsoft Hyper-V. nControl 2.2 also adds hardware support for a 10Gb Dual Port Ethernet Module with Long Reach Media and Direct Attach Media, along with enhanced I/O statistics for virtualized Ethernet and Fibre Channel connections.

“Our goal as a company has always been to simplify the implementation and management of I/O in enterprise datacenters,” said Steve Knodl, director of product management for NextIO. “The latest version of nControl allows us to apply this simplicity to more environments, enabling more users to realize the CapEx and OpEx savings that vNET can provide.”

Categories
Cloud Computing Server Virtualization

Lack of Communication and Cross-Domain Tools Impede Collaboration Needed for VM Deployments

Analyst firm Enterprise Management Associates® (EMA™) and Infoblox Inc. today announced the results of a recent survey assessing the impact of cross-team collaboration on virtual machine (VM) deployments.

The survey results reveal that VM deployments require significant involvement from multiple IT departments beyond the server virtualization team, including datacenter/systems administration (physical infrastructure), networking operations, network/application security, storage, and application support (deployment and operations).

More than 40% of survey respondents indicated that their organizations were provisioning 500 or more VMs per month, with some reporting provisioning as many as 5,000 VMs in a month. This volume of VM deployment is a recipe for stress across the enterprise, but particularly among the various groups whose efforts must align and mesh to assure successful deployments.

The majority of survey respondents described overall collaboration across the VM lifecycle as formalized but manual. This translates into higher-touch, less efficient processes than automated approaches provide. If VM deployments continue to occur at accelerating rates, higher levels of efficiency will be required to ensure timely and properly configured VMs.

“Server virtualization is on the rise and cloud applications will continue to fuel its growth. The more manual the VM lifecycle processes are, the more prone they are to delays and errors. The purpose of moving to virtualized infrastructures is to speed up the deployment and provisioning processes,” said Tracy Corbo, Principal Research Analyst at Enterprise Management Associates. “The VM deployment and provisioning process demands cross-team collaboration and it requires breaking down the silos and implementing cross-domain tools.”

Most respondents agreed that integrated multi-disciplinary IT teams, additional staff, and fully automated, cross-domain orchestration tools would help accelerate the end-to-end VM deployment process. However, when participants were asked to name the greatest inhibitors preventing more proactive (vs. reactive) cross-team collaboration, the majority (57%) said they were just too busy and understaffed.

“The survey clearly shows that enterprise IT team members acknowledge that cross-team coordination, communication, and collaboration tools are required to improve IT team efficiency and virtual deployment success,” said Steve Garrison, Infoblox Vice President of Corporate Marketing. “More than ever, Infoblox solutions are clearly aligned with the needs of the marketplace through our ability to provide network control and automation solutions that significantly reduce the need for highly manual, repetitive, and often error-prone tasks that can create ‘drag’ in virtual deployments.”

For example, Infoblox announced today new automated network control capabilities and an intuitive Automation Task Board designed to enable various IT department members to initiate multi-step, often-repeated, and time-consuming network tasks with simple mouse clicks, all while providing cross-team visibility and auditability (see related announcement: Infoblox Bridges Siloed IT Structures and Delivers Operational Efficiencies Required to Maximize Cloud Computing Deployments).

To view detailed survey results visit: http://www.infoblox.com/en/resources/network-automation-center.html.

Categories
Server Virtualization

PHD Virtual Records Record Revenue in 2011, Expands Virtualization Product Portfolio

PHD Virtual Technologies, pioneer and innovator in virtual machine backup and recovery, and provider of virtualization monitoring solutions, announced record revenue for fiscal year 2011. PHD Virtual continued its trend of strong quarterly growth, with the fourth quarter of 2011 marking the sixth consecutive quarter of record revenue growth for the company. The company achieved significant growth in its customer base with new customer acquisition growing by 75%.

PHD Virtual recorded strong product growth across its VMware and Citrix product lines both domestically and internationally during the year. The company invested in expanding its business with the addition of key distributors in international markets which helped to accelerate international revenue growth, including more than a 50% increase in revenue from the EMEA region. The company also saw success expanding into new markets and increasing sales to existing customers with new releases of its data protection solution and the addition of its new comprehensive monitoring solution, PHD Virtual Monitor.

The new PHD Virtual Backup and Replication version 5.3 was released in the fourth quarter, extending the company’s flagship data protection solution with virtual machine replication, faster performance, and flexible archiving technologies that provide an easier, more cost effective solution for disaster recovery of VMware and Citrix virtual environments. Its PHD Virtual Backup solution continued to achieve significant competitive wins over other vendors including Veeam, Quest and Symantec. Most commonly cited by customers was PHD Virtual’s ability to provide a robust data protection and disaster recovery solution that was significantly easier and more cost effective than alternatives.

“The business continued to grow sharply through 2011 and we are very pleased to record our sixth consecutive quarter of record growth,” said Thomas Charlton, chairman and CEO, PHD Virtual. “We built a solid foundation for the business throughout the year, while expanding our portfolio of data protection and management solutions to take advantage of continued growth in the virtualization market. 2012 will see continued expansion of our product portfolio with planned enhancements and new technologies targeted at optimizing data protection, monitoring and management for the rapidly growing Cloud, Service Provider and Virtual Desktop (VDI) Markets.”

Categories
Blog & Tutorials

VMware: Unable to Migrate VM, Missing Snapshot File, and Out of Space Errors

The error in the headline is right out of snapshot hell. If you have virtual machines (VMs) with large memory requirements, you probably know that you need extra space on the datastore to store the memory swap file (.vswp).

When the datastore housing the VM runs out of disk space, you will not be able to create new VMs or power on existing VMs, and you may notice performance issues with the VMs that are running.

Here is how to fix the error and successfully vMotion the VM:

Powering off the VM, or reducing its configured memory size, will reduce the amount of space required for the memory swap file and free up disk space on the datastore (for the other VMs housed there).

Use VMware Converter to copy the VM to another host or datastore; the new copy will not carry the snapshot.

For future or new VMs, you also have the option to store swap files with the VM or on another datastore, such as a non-replicated LUN. In vCenter, click on a host, go to the Configuration tab, select “Swap file location,” and click Edit for the list of options.

To avoid allocating extra space for the .vswp file altogether, you can set a memory reservation for the virtual machine equal to the amount of RAM assigned to it; the swap file size is the configured memory minus the reservation. For example, for a VM with 4GB of RAM assigned, edit the VM settings and use the Resources tab to reach the memory (and CPU) reservation settings.
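To see how much datastore space the swap file will consume, remember that the .vswp size equals the VM's configured memory minus its memory reservation. A quick sketch of that arithmetic (the function name and numbers are just for illustration):

```python
def vswp_size_gb(configured_ram_gb, memory_reservation_gb=0.0):
    """Size of a VM's .vswp file: configured memory minus reservation."""
    return max(configured_ram_gb - memory_reservation_gb, 0.0)

# A 4GB VM with no reservation needs a 4GB swap file on the datastore...
print(vswp_size_gb(4))      # 4.0
# ...but a full 4GB reservation shrinks the swap file to nothing.
print(vswp_size_gb(4, 4))   # 0.0
# A partial reservation reduces it proportionally.
print(vswp_size_gb(4, 1))   # 3.0
```

This is also why full reservations trade one cost for another: the datastore no longer needs swap space, but the host must commit the full 4GB of physical RAM to that VM.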

I hope the above helps you avoid future out-of-space or missing-snapshot errors. If you have questions, comments, or another virtualization support issue, please post in our discussion forums.

Categories
Cloud Computing Data Center Server Virtualization

CiRBA Releases Revolutionary Control Console for Virtual and Cloud Infrastructure

CiRBA Inc., a leader in Data Center Intelligence (DCI) software, today announced the general availability of Version 7.0 of CiRBA DCI-Control, which revolutionizes how organizations control virtual and cloud infrastructure. CiRBA’s new Control Console enables IT organizations to see in a single glance where attention is required at the VM, host, and cluster level, and then provides explicit instructions on what to do in order to eliminate risk and increase efficiency.

“CiRBA’s Control Console is the brain for new school data centers, giving an unprecedented level of control over virtualized infrastructure,” said Andrew Hillier, co-founder and CTO, CiRBA. “The ability to know precisely where to place workloads and how to allocate resources enables organizations to run leaner data centers and achieve their financial objectives for virtualization and cloud computing. CiRBA has removed the guesswork from operations by providing answers according to an organization’s operational policies and workload requirements. For the first time, infrastructure managers know exactly what needs to be done to operate efficient, worry-free virtualized infrastructure.”

“CiRBA Version 7.0 enables infrastructure owners to quickly and easily understand what is going on in virtual and cloud infrastructure and what they need to do to improve the current and future state of operations,” said Rachel Chalmers, Research Director Infrastructure Management for 451 Research. “CiRBA just gets visualization in a way that other companies don’t.”

Having Just the Right Amount of Infrastructure
Most organizations today combat risk in virtualized infrastructure by over-provisioning. Excess capacity and overly-conservative allocations create a costly buffer zone that erodes the ROI of virtualization and clouds. CiRBA’s Control Console reveals whether or not VMs, hosts or clusters have “Too Little Infrastructure” (resources in the red band), “Too Much Infrastructure” (resources in the yellow band) or are “Just Right” (resources in the green band). This provides a simple means of understanding what is at risk, what is inefficient, and where action needs to be taken. Infrastructure managers use the Spectrum view and explicit actions provided by CiRBA to work toward a simple goal of moving all of the entities in an environment into the green. This powerful management paradigm radically changes the way infrastructure requirements are viewed and managed, highlighting waste and inefficiency and the steps required to address these issues in a way not previously possible.

Predictive Analytics
Users can also see an environment’s status and requirements over time by leveraging historical, current and predictive views through the Control Console. The predictive analytics incorporate “Bookings” to reserve capacity for new workloads and hosts coming online (or systems leaving the environment) so that a comprehensive, forward-looking view is provided. This helps to ensure enough capacity is budgeted and the guesswork of determining future requirements is eliminated. The Action System in the Control Console leverages these analytics to provide users with details and automation options for workload placement changes, resource allocation changes and capacity changes recommended to optimize an environment.

Comprehensive Policy-Based Control
Only CiRBA’s analytics model an organization’s business and operational policies so that requirements for SLAs, regulations, DR, HA, and other critical criteria are reflected, measured and complied with to ensure low risk, highly actionable answers. Policies effectively form a “contract” that ensures the safe operation and appropriate placement of workloads. With Version 7.0, CiRBA has delivered a powerful new Policy Manager so that organizations can easily apply, tune and control policies.

CiRBA Version 7.0 ships with six standard policies based on best practices that are easily configured and applied through the Policy Manager. These policies include Production Critical, Production IT, Production Cloud, Production Batch / HPC, Pre-Production, and Dev / Test. Settings available through these policies relate to guest density, guest performance, availability, placement volatility, operational windowing, resource reclamation, compliance and automation.

“With increased adoption of shared resources in today’s IT environments, organizations simply cannot address the complexity introduced by workload mobility, dynamic resource allocations, and the need for agile decision-making using existing monitoring and capacity management tools,” said Andrew Hillier, co-founder and CTO, CiRBA. “It is only by leveraging predictive analytics that understand all of the workload requirements, constraints, patterns and policies, and using this to guide and automate workload placement and resource allocation, that organizations will achieve their goals of reduced infrastructure costs and increased agility.”

Categories
Storage Virtualization

Astute Networks Reduces ViSX G3 for VMware Pricing to Under $20,000 MSRP

Astute Networks®, the innovator of ViSX G3™ for VMware, featuring its patented Data Pump Engine™ technology and award-winning Networked Performance Flash™ architecture, today announced lower pricing that makes ViSX G3 more affordable than ever. Effective immediately, ViSX G3, with 100,000 IOPS of sustained random I/O performance, starts at under $20,000 MSRP.

“Our fully leveraged Channels model and ViSX G3 adoption have given us operational economies of scale that we are passing along to our partners to further drive demand,” said Steve Topper, CEO of Astute Networks. “Flash is white hot and I believe 2012 will be the year of low cost delivery—competitive and efficient $/GB—along with reliability, flash management and data protection. ViSX G3 solutions are well positioned to deliver this value with low cost, enterprise-class reliability and 100% sustained random I/O performance.”

ViSX G3 solutions enable organizations with VMware-based virtualized environments to lower total cost of ownership (TCO) and improve return on investment (ROI) by:

  • Efficiently supporting a higher number of virtual machines per host
  • Accelerating virtual machine and datastore performance
  • Virtualizing any database or application into production with sustained performance
  • Scaling the number of virtualized desktop (VDI) clients and performance
  • Improving data protection, backup and recovery

“ViSX G3 for VMware, with 100% enterprise-class flash memory (eMLC), delivers sustained performance equivalent to 1,000+ enterprise hard disks, yet fits in just 3U of space, uses less than 300 Watts of power, and costs less than $20,000,” said Omar Barraza, Director of Product Management and Product Marketing at Astute Networks. “Only ViSX G3, with its patented Data Pump Engine and unique Networked Performance Flash architecture, allows any organization to accelerate many VMware virtual machines, databases and applications with 100% sustained random I/O performance for less than the cost of a single, high-performance host server.”

About ViSX G3 for VMware
ViSX G3 is a patented, purpose-built, network-based flash appliance for provisioning high performance datastores that are shareable by all virtual machines, across all servers, across pervasively deployed Ethernet networks. ViSX G3 enables faster and broader adoption and deployment of VMware-based server and desktop virtualization, and cloud computing, by complementing existing virtualized infrastructure with performance-optimized high reliability enterprise-class flash datastores. Each ViSX G3 adds 100,000 random IOPS of sustained performance, can support up to 64 hosts and their virtual machines (VMs) and starts under $20,000 MSRP—equivalent to about $300 per host and less than $50 per VM—including extended warranty, expert support and on-site service.

Categories
Blog & Tutorials

“Invalid configuration for device 4” Error When Cloning or vMotioning a Virtual Machine

When performing vMotion on, or cloning, a virtual machine (VM) or template that uses a vDS (distributed virtual switch) configuration, you may come across this error:
Invalid configuration for Device ‘4’

I experienced this error in my lab over the weekend, and removing the VM from the vDS portgroup fixed the issue. According to VMware, the cause is that during the clone procedure a vDS port requires a reservation, and the issue occurs if the reservation expires too quickly.

Here is the step-by-step fix (applies to vSphere 4.0/4.1; it may also work with 5.0):

  1. Launch the vSphere Client.
  2. Right-click the VM that is producing this error and click Edit Settings.
  3. Under the Hardware tab, check the settings for all the Network Adapter devices.
  4. Click Network Adapter 1 and change the Network Connection > Network Label from the vDS portgroup to a standard vSwitch port group.
  5. Perform the vMotion or clone again.
  6. After the task in step 5 completes, add the VM back to the vDS portgroup.

You should be all done. One thing worth mentioning here: when you start the vMotion/clone process on such a VM, the vSphere Client does give you a warning under Compatibility:

Network interface ‘Network Adapter 1’ uses network ‘Distributed Virtual Switch (uuid)’, which is not accessible.

But for some reason it still allows the user to click “Next”, only to fail later in the task.

If you have any questions, comments, or another virtualization support issue, be sure to register and join us in our Virtualization Forum to ask them.