Friday, July 3, 2015

OpenStack Cinder: block storage on the open-source cloud platform

The OpenStack platform is an open-source collaboration to develop a private cloud ecosystem, delivering IT services at web scale.
OpenStack is divided into a number of discrete projects, each with a code name that reflects the purpose of the project itself.
Virtual machines – or compute – are delivered through a project called Nova. In early OpenStack implementations, Nova virtual machines were stateless; that is, they were not kept on persistent storage, so a Nova virtual machine would lose its contents when it was shut down.
As Nova developed, a feature called nova-volume was introduced to store virtual machines on persistent media, similar to the way Amazon Web Services Elastic Compute Cloud (EC2) stores instances on persistent media known as Elastic Block Store (EBS). The nova-volume feature was eventually superseded by a separate project called Cinder that delivers persistent block-level storage to OpenStack environments.
Cinder performs a number of operations in OpenStack environments. In the first instance it acts as a piece of middleware, providing application programming interfaces (APIs) that allow Cinder volumes to be created through use of the Cinder client software. A single Cinder volume is associated with a single Nova compute instance or virtual machine. Cinder keeps track of the volumes in use within OpenStack using a MySQL database created on the Cinder services controller.
Through the use of a common interface and APIs, Cinder abstracts the process of creating and attaching volumes to Nova compute instances. This means storage can be provided to OpenStack environments through a variety of methods.
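As a rough illustration of that workflow, the sketch below uses the python-cinderclient and python-novaclient libraries (version 2 APIs) to create a volume and attach it to a running instance. The credentials, endpoint and instance UUID are placeholders, and the exact authentication details will differ between deployments.

    from cinderclient import client as cinder_client
    from novaclient import client as nova_client

    # Placeholder credentials; real deployments normally source these from
    # an openrc file or the identity service catalogue.
    AUTH_URL = "http://controller:5000/v2.0"
    USER, PASSWORD, PROJECT = "demo", "secret", "demo"

    # Ask the version 2 Cinder API for a 10GB volume.
    cinder = cinder_client.Client('2', USER, PASSWORD, PROJECT, AUTH_URL)
    volume = cinder.volumes.create(size=10, name='app-data')

    # Attach the new volume to an existing Nova instance as /dev/vdb.
    nova = nova_client.Client('2', USER, PASSWORD, PROJECT, AUTH_URL)
    nova.volumes.create_server_volume('INSTANCE-UUID', volume.id, '/dev/vdb')

Behind these two calls, Cinder records the volume in its database and the relevant driver does the work of presenting the block device to the hypervisor host.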
By default, Cinder volumes are created on a standard Linux server that runs Logical Volume Manager (LVM). This allows physical disks to be combined to implement redundant array of independent disks (RAID) data protection and to carve out logical volumes from a physical pool of space, called a volume group. Cinder volumes are created from a volume group called cinder-volumes, with the OpenStack administrator assigned the task of deciding exactly how this LVM group is mapped onto physical disk.
Cinder can also manage external storage resources, either from a physical external storage array or from software-based storage implementations. This is achieved through the use of a Cinder driver that maps Cinder requests to the commands required on the external storage platform – in fact, the default LVM implementation is simply another Cinder driver. Support is available for iSCSI and Fibre Channel protocols, with specific support based on the capabilities of the supplier’s storage hardware (see the support matrix described later).
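To show the shape of that mapping, here is a deliberately simplified, self-contained sketch of the driver pattern. Real drivers subclass cinder.volume.driver.VolumeDriver and are loaded by the cinder-volume service; the array_client object and its methods used here are hypothetical stand-ins for a supplier's management API.

    class ExampleArrayDriver(object):
        """Illustrative only: translates generic Cinder requests into
        array-specific commands on a hypothetical backend client."""

        def __init__(self, array_client):
            self.array = array_client

        def create_volume(self, volume):
            # Cinder hands over a volume record; the driver maps it to
            # whatever the array understands, here a LUN sized in GB.
            self.array.create_lun(name=volume['name'], size_gb=volume['size'])

        def delete_volume(self, volume):
            self.array.delete_lun(name=volume['name'])

        def initialize_connection(self, volume, connector):
            # Return the connection details Nova needs to attach the LUN.
            target = self.array.export_lun(volume['name'], connector['initiator'])
            return {'driver_volume_type': 'iscsi', 'data': target}
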
Storage suppliers have been quick to provide Cinder support for their platforms, enabling a wide range of storage hardware to be used in OpenStack deployments. Depending on the specific implementation, the driver allows OpenStack to automate the process of creating volumes and assigning them to Nova virtual machines.
Some hardware platforms (traditional arrays that use pools of RAID groups, for example) require storage administrators to create a pool, or pools, of storage for OpenStack to use.
A list of supported platforms is available but this isn’t exhaustive and many suppliers are not mentioned. You should check with your storage supplier for specific information on Cinder support and the features their drivers offer.
The use of external storage for OpenStack provides the ability to take advantage of native features on the storage platform where available, such as data deduplication, compression, thin provisioning and quality of service.
External storage isn’t limited to physical hardware appliances; block storage can be assigned to OpenStack from a variety of software-based systems, both commercial and open-source. This includes Ceph and GlusterFS. Ceph, for example, is implemented through the use of Rados Block Device (RBD), a device driver in the Linux kernel that talks natively with a Ceph storage cluster.
With each successive release of OpenStack (the most recent being Kilo – versions are named after successive letters of the alphabet), new features have been added to Cinder. Some of these have been implemented through a second version of the Cinder API, as version one didn’t have support for the newer features.
Version one and two APIs provide commands to create, update, delete and extend volumes, as well as attach and detach them to instances – Nova virtual machines. Volumes can be assigned volume types, allowing them to be matched to a specific storage provider, where an OpenStack deployment takes storage from multiple providers. Alternatively, volume types can be used to differentiate between different classes of storage, based on, for example, physical characteristics such as RAID protection or performance.
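Reusing the client object from the earlier sketch, and assuming a backend that has been given the hypothetical name fast-array in cinder.conf, defining and using a volume type might look like this:

    # Define a 'gold' volume type and pin it to a particular backend via an
    # extra spec that the Cinder scheduler matches against.
    gold = cinder.volume_types.create('gold')
    gold.set_keys({'volume_backend_name': 'fast-array'})

    # Volumes created with this type land on the matching backend.
    cinder.volumes.create(size=50, name='db-data', volume_type='gold')
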
Cinder provides the ability to take snapshots of volumes. For external storage platforms, this is achieved by using the native snapshot capability of the underlying storage platform. The Juno release of OpenStack introduced the ability to group Cinder volumes into a consistency group, allowing all the volumes in the group to be snapshotted together. To date, only a few suppliers support this functionality.
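As a brief example, again reusing the client from above, a snapshot request goes through the same API regardless of which backend driver actually implements it:

    # Snapshot an existing volume; force=True allows snapshotting a volume
    # that is currently attached to an instance.
    snap = cinder.volume_snapshots.create(volume.id, name='pre-upgrade', force=True)
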
Cinder also supports the ability to back up volumes. Unfortunately, this process is limited to using an object store as the backup target, and restores require the entire volume to be recovered. This may prove limiting in many circumstances and is one reason why Manila – the OpenStack file services project – could provide a more appropriate way to manage application data.
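To make the mechanics concrete, a sketch of the backup calls, which assume the cinder-backup service is running and configured with an object store (Swift by default) as its target:

    # Back the volume up to the object store, then restore it later.
    # Restores recreate the whole volume; there is no file-level recovery.
    backup = cinder.backups.create(volume.id, name='nightly')
    cinder.restores.restore(backup.id)
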
OpenStack distributions are available from a wide range of suppliers – well over 20 at last count. Each supplier provides support for a specific release of OpenStack and for each of the core OpenStack components. Cinder is a core component and ships with each distribution. The OpenStack marketplace provides a list of suppliers and their offerings. This also lists each of the projects and their supported levels as well as the supported level of the APIs. Today almost all suppliers support the version two API for Cinder.
Cinder provides great flexibility to add storage to OpenStack environments, whether through native LVM support or via an external appliance or software. However, as with all components of OpenStack, Cinder requires time and effort to understand and configure correctly. This is especially true of fault diagnosis, for example when volume state gets out of sync in the Cinder database.

In addition, users should remember that Cinder drivers could be removed as well as added to new OpenStack distributions if suppliers don’t meet the specifications or requirements set by the OpenStack project. This could cause problems for future upgrades and so it is always worthwhile checking the commitment of your storage supplier to supporting Cinder features and future OpenStack releases.

Swedish cloud provider gets SolidFire all-flash storage for OpenStack

Swedish service provider Elastx has opted for Solidfire all-flash arrays to power its OpenStack-based cloud services for customers.
The Stockholm-based company delivers platform- and infrastructure-as-a-service and cloud services to mostly Sweden-based customers, ranging from individuals to enterprise-class organisations, many in the media sector.
It initially deployed Solidfire three years ago when it started up, but needed storage that would integrate with its OpenStack cloud infrastructure after a year or so.
Elastx CEO Joakim Öhman said he had looked at general storage for all its platforms. “But we knew we would deploy OpenStack, so we wanted something that would integrate well with it that was SSD-based and that would scale horizontally,” he said.
“Solidfire was the only system at the time that could fully meet these requirements. There was no-one else with OpenStack compatibility.”
Elastx deployed four 3TB nodes of Solidfire SF3010 all-flash storage. These use 300GB MLC flash drives. Solidfire hardware scales in 1U nodes that go from four up to 100, with I/O performance reaching up to 5 million IOPS.
Solidfire started out aiming at cloud service providers and provides Fibre Channel and iSCSI block storage. With cloud in mind, it has automation and multi-tenancy functionality, and administrators can assign storage volumes with different characteristics to different customers.
More recently, Solidfire added advanced storage features such as replication and other data protection features to appeal more to the enterprise market.
Solidfire provides APIs to allow its storage to be used with OpenStack’s Cinder block storage.
Öhman said: “We added a couple of lines of configuration into OpenStack to integrate with Solidfire. Then we could provision it to different storage volumes that gave different levels of performance according to customer requirements.”
The alternative at the time, said Öhman, would have been to use the native Cinder block storage functionality in OpenStack, but this would have required the company to build its own hardware.
Öhman said the key benefits of Solidfire had been its integration with OpenStack and the fact that it has not suffered a single outage since deployment: “It has performed as we expected it to and it has the ability to scale up,” said Öhman.

The key measurable benefit has been that Elastx set a target for the amount of time it wanted to spend on management each month – and the actual total has come in well below that.

Tablets replace need for multiple devices in the enterprise

Businesses are increasingly turning to tablets as their only computing device, according to analyst firm IDC.
In a recent study, IDC found that tablets were the only business device used by 40% of respondents. Analysts reported that hybrid products designed to replace portable and desktop PCs are driving up tablet adoption in the enterprise.
The study also showed that hybrid devices – in either the detachable format, whereby the removable keyboard allows the touchscreen to double as a tablet, or the convertible form factor, in which the notebook's hinge rotates 360 degrees for a similar effect – are usually purchased with larger screen sizes than standard tablets.
While just over 10% of all slates have a screen size larger than 11in, almost 30% of hybrids currently exceed this size – expected to rise to over 50% over the next couple of years – which reinforces the assumption that two-in-ones and convertibles can be a replacement for portable PCs.
For instance, Microsoft's recently introduced Surface 3, powered by the Intel Atom processor, offers a 10.8in 1920x1280 high-definition screen and 64GB of SSD storage. Given that it runs Windows 8.1, such hybrid devices can be a good fit for businesses since there is no need to find alternatives for existing PC applications.
Marta Fiorentini, senior research analyst at IDC, said: “A large share of tablets is already used by employees as their only work tool, either replacing traditional client devices or for functions previously not supported by any computing device.
“As digitisation transforms business processes and tablets are optimised for business functions from both a hardware and application standpoint, we can only expect an increase in the share of standalone tablets, as confirmed by the purchase intentions of the study respondents.”
IDC found that the use of tablets as standalone or companion devices has a strong correlation with the user’s job role.
User groups generally associated with activities involving document creation or editing, such as executives, marketing staff, sales staff, engineers or white-collar employees (including analysts, consultants, doctors, those in the legal profession and journalists) tend to use their tablets in addition to desktop or portable PCs.
On the other hand, workers who perform all or the majority of their activities on the road, in the field or with customers are more likely to rely solely on their tablets.
Operations agents and production workers equipped with tablet slates use them as their only work device in, respectively, 55% and 64% of cases. In comparison, only 38% of executives and 44% of white-collar workers use only their tablet slates.

A small, lightweight laptop is a must-have for commuters. The Microsoft Surface 3, with 10 hours of battery life and the ability to charge via the USB port, could make a great travelling companion.
The detachable keyboard is the same as the one on the Surface Pro, and typing is comfortable, helped by the Surface 3’s stand which allows the keyboard to be raised slightly.
In tablet mode, weighing 798g, the Surface 3 is heavier than an Android tablet or iPad, but then it is a full-blown PC and features a USB 3.0 high-speed connection.

The screen is bright and seems to work well for viewing Netflix and web browsing. The only downside is that it runs Windows 8.1 – once Windows 10 is released, however, the Surface 3 will be an ideal hybrid PC/tablet for use when commuting.

The rise of general-purpose AI and its threat to humanity

Autonomous cars, automated trading and smart cities are among the great promises of machine intelligence. But artificial intelligence (AI) promises much more, including being man’s best friend.
BigDog was a robot developed in 2008, funded by Darpa and the US Army Research Laboratory’s RCTA programme. BigDog was designed to walk and climb – skills that humans master instinctively at an early age, but which cannot easily be programmed into a machine. Instead, researchers apply artificial intelligence techniques to enable such robots to ‘learn’.
Imagine a computer that can think better than humans; that can make profound cognitive decisions at lightning speed. Such a machine could better serve mankind. Or would it?
“AI that can run 1,000 times faster than humans can earn 1,000 times more than people,” according to Stuart Armstrong, research fellow at the Future of Humanity Institute. “It can make 100 copies of itself.”
This ability to think fast and make copies of itself is a potent combination – one that could have a profound effect on humanity. “With human-level intelligence, plus the ability to copy, it could hack the whole internet,” he warned.
And if this general-purpose AI had a body, said Armstrong, “it could walk into a bar and walk out with all the girls or guys”.
But far from being a super hacker or master pickup artist, Armstrong argues that if such machines were to become powerful, the world would resemble their preferences. For instance, he said they could boost their own algorithms.
A socially aware general-purpose AI could scan the web for information and, by reading human facial expressions, it could deliver targeted speeches better than any political leader, said Armstrong.
Taken to the extreme, he warned that it is difficult to specify a goal that is safe: “If it were programmed to prevent all human suffering, the solution could be to kill all humans”.
In his book Smarter Than Us, Armstrong lays down a few points that humans need to consider about general-purpose AI: “Never trust an entirely super-intelligent AI. If it doesn't have your best interests at heart, it'll find a way to obey all its promises while still destroying you.”
Such general-purpose artificial intelligence is still a long way off. Armstrong’s closest estimate of when general-purpose artificial intelligence will be developed falls somewhere between five and 150 years’ time. But it is a hot topic, and London-based DeepMind recently demonstrated how a machine used reinforcement learning to take what it had learned from playing a single Atari 2600 game and apply it to other computer games.
Strictly speaking, DeepMind is not general AI, according to Armstrong. It is narrow AI – a form of artificial intelligence that is able to do tasks people once said would not be possible without general-purpose AI. IBM's Watson, which won US game show Jeopardy, and Google Car are both applications of narrow AI.
Gartner distinguished analyst Steve Prentice said narrow AI is a machine that does one task particularly well: “The variables have to be limited, and it follows a set of rules.” For instance, he said an autonomous vehicle could be programmed in a way that could prevent cycle road deaths.
In the Gartner report When smart things rule the world, Prentice argues the case for CIOs to start thinking about the business impact of smart machines that exhibit AI behaviour. In the report, he notes: “Advanced capabilities afforded by artificial intelligence (AI) will enhance today’s smart devices to display goal-seeking and self-learning behaviour rather than a simple sense and respond.” Prentice believes these “artificial agents” will work together with or on behalf of humans to optimise business outcomes through an ecosystem or digital marketplace.
For CIOs, Prentice regards autonomous business as a logical extension of current automated processes and services to increase efficiency and productivity rather than simply to replace a human workforce. “For most people, AI is slanted to what you see on-screen. But from a business perspective, we are so far away from this in reality,” he said.
In fact, he believes there is no reason why a super-intelligent AI machine could not act like a CEO or manager, directing humans to do tasks where creativity or manual dexterity is important.
This may sound like a plot from Channel 4 sci-fi drama Humans, but, as Armstrong observes in Smarter Than Us, “Even if the AI is nominally under human control, even if we can reprogram it or order it around, such theoretical powers will be useless in practice. This is because the AI will eventually be able to predict any move we make and could spend a lot of effort manipulating those who have ‘control’ over it.”

So back to man’s best friend. Armstrong is not afraid of the metal-clad robot with an Austrian accent that Arnold Schwarzenegger depicted in Terminator. For him, a super-intelligent machine taking the form of a dog and biting the proverbial hand that feeds it is a far more plausible way in which machines could eventually rule the world.

Windows 10 release won't save the PC market

The release of Windows 10 is unlikely to make much difference to the long-term prospects of the PC market, Gartner has warned.
According to the IT analyst house’s second quarter IT spending forecast, the release of Windows 10 on 29 July 2015 looks set to lead to a surge in PC sales from then until early 2016, but this won’t be enough to reverse the continued decline of the wider PC market.
Speaking to Computer Weekly, John-David Lovelock, head of forecasting at Gartner, said PC replacement cycles have lengthened in anticipation of Windows 10’s release over the past 18 months, but 2016 will remain a flat year for PC sales overall.
“In other words, it will be a good year, as flat is the new up, where PC sales are concerned. But in general, there will not be any significant new PC purchases, it’s mainly going to be replacements of existing PCs. The idea that Windows 10 will save the PC is just not going to hold true,” he said.
Microsoft’s next-generation operating system (OS) is being ushered in as a replacement for the much-maligned Windows 8/8.1.
The latter was released in October 2012 as the successor to the hugely popular Windows 7, but struggled to gain much in the way of traction in the business and consumer world, thanks to complaints about its clunky user interface and the removal of the start menu.
Microsoft moved to correct this with the release of Windows 8.1 in October 2013, but despite a sizeable fanfare, the OS struggled to make much of a dent in the market share of either Windows XP or Windows 7.
According to NetMarketshare’s monthly look at worldwide usage of various desktop operating systems, Windows 7 remains installed on nearly 58% of PCs as of May 2015, while XP features on 14.60%, despite entering end of life in April 2014.
By contrast, Windows 8.1 runs on 12.9% of desktops and Windows 8 on just 2.57%, NetMarketshare’s figures show.
Lovelock said Gartner anticipates Windows 7 will retain its hold in the consumer market for some time to come, but, having exited mainstream support in January 2015, the enterprise will be quicker to make the move to Windows 10.

“We’re looking forward to a good replacement cycle around Windows 10. There is still going to be that wide histogram of reactions between people who want to move right now versus those who are still on Windows 95, but it’s not going to be the damp squib Windows 8 was,” Lovelock added.

Windows Server 2003 end of support: five options to choose from

Microsoft’s withdrawal of support for Windows Server 2003 on 14 July is a deadline many IT departments have not been looking forward to.
Industry estimates indicate that upwards of a fifth of servers are still running this version of Windows Server, which has now reached the end of its life as far as Microsoft is concerned.
Organisations will have the option to pay a premium for custom support contracts, but some businesses may find that the option to migrate to a newer operating system (OS) is out of their control.
In November 2014, US-Cert issued a warning about the end of support deadline, stating: “Computers running the Windows Server 2003 operating system will continue to work after support ends. However, using unsupported software may increase the risks of viruses and other security threats. Negative consequences could include loss of confidentiality, integrity and/or availability of data, system resources and business assets.”
In a report titled Windows Server 2003 end of life: An opportunity to evaluate IT strategy, analyst company IDC warned that organisations could face problems with regulatory compliance if they remain on Windows Server 2003.
“Failure to have a current, supported operating system raises significant concerns about an organisation’s ability to meet regulatory compliance requirements, as well as the needs of business units, partners, and customers,” the IT research firm noted in its February 2015 report.
But Windows Server 2003 is still dominant. According to CloudPhysics, which provides big data analytics for datacentres, one in five Windows Server virtual machines (VMs) runs the 2003 version, and thus will be affected by the removal of support.
And while Windows 2003 VM share is declining, given the current rate of decline CloudPhysics estimated that the proportion of servers still running the unsupported OS would reach a statistically insignificant level in the first half of 2018, three years after support ends. “This is a relatively faster decline than Windows 2000, which reached end of life in 2005 but retains a 1% share 10 years later,” the firm said.
According to CloudPhysics, since virtualisation separates PC server hardware from the OS, legacy operating systems can exist for much longer since they are able to run on newer servers.
In a blog post, Krishna Raj Raja, a founding member of CloudPhysics, noted that prior to virtualisation a server refresh generally required an OS refresh. “Newer hardware typically has limited or no support for legacy operating systems, so upgrading the OS became a necessity. With virtualisation, however, the hardware and the OS are decoupled, and therefore OS upgrades are not a necessity,” said Raj Raja.
Given that VMware announced support for 64-bit operating systems in 2004, and vSphere supports both 32-bit and 64-bit operating systems simultaneously, there is no need to choose one over the other, according to Raj Raja, with a legacy 32-bit OS (and even 16-bit OS) able to continue to co-exist with newer 64-bit operating systems.
“VMware’s support for legacy operating systems is excellent. It is possible to run a legacy OS such as Windows NT on modern processors that Windows NT natively wouldn’t even recognise. Also, the virtual devices in VMs provide encapsulation and prevent device driver compatibility issues,” said Raj Raja.
Dell Software president John Swainson said some organisations are upgrading to Windows Server 2008 as it is less disruptive than going to Microsoft’s newest version, Windows Server 2012 R2.
In a recent interview with Computer Weekly, he said he had seen a number of organisations simply migrate to Windows Server 2008, as it is still a supported operating system and does not require the major application reworking associated with shifting the whole Windows Server infrastructure onto Windows Server 2012.
“Moving to Windows 2012 requires changing applications, and is a far more expensive upgrade from Windows Server 2003,” he said.
In the Gartner paper Managing the risks of running Windows Server 2003 after July 2015, one of the suggestions analyst Carl Claunch made for those systems that cannot be moved is to run a demilitarised zone (DMZ).
“The concept of a demilitarised zone has been frequently used to isolate systems that are accessible by outsiders, to minimise what they could do to the rest of the datacentre if they become compromised. Further, much tighter control can be placed on which other systems they are permitted to contact and the types of access allowed,” he wrote.
“This may reduce the usability of a system, but it may be better than the alternative of losing all use if a new vulnerability becomes known. The nature of the vulnerability and the usefulness of the system in that case will help decide whether a DMZ may be sufficient to address risks.”
Could Linux be a viable option? Red Hat argues that, since organisations moving to Windows Server 2012 would incur considerable costs, the option of running workloads on Linux should not be dismissed.
“If your organisation is running Windows Server 2003, now is the time to carefully consider Linux. If you upgrade to a new Windows infrastructure, 2008 or 2012, you’ll incur significant expenses associated with additional licences, client access licences, software licences, migration and future maintenance,” claimed Red Hat in its Migrating from Windows to Red Hat Enterprise Linux executive brief.

The cloud is another option. Why run a file server on-premise if a cloud service such as Box can be used instead? Application servers may be run more cost effectively on the public cloud.
Certainly, moving to the next supported release of Windows Server is not the only approach an IT department can take. Overall, the end of support for Windows Server 2003 represents an opportunity for CIOs to reassess their legacy Windows server applications and a chance to drop them or re-engineer them to run on a different platform.