HP talks cloud delivery options, the importance of OpenStack, how it competes on price

An in-depth conversation with Bill Hilf, Senior Vice President of Product and Service Management for HP Cloud, about where Helion fits in, cloud consumption models and coming change.

Bill Hilf, Senior Vice President of Product and Service Management for HP Cloud, brings an interesting perspective to his job given his former role as General Manager of Product Management for Windows Azure, Microsoft’s cloud platform. Network World Editor in Chief John Dix and Senior Editor Brandon Butler got Hilf on the line for his big picture view of the importance of OpenStack, why HP recently acquired Eucalyptus, the impetus to compete on price, and the various cloud delivery options customers are pursuing.

How do you position Helion and where does it fit into the market?

Helion is our brand name for our cloud product portfolio, which allows customers to deploy in any cloud context, be it a private, public, or hosted cloud environment. The applications, data, and virtual machines that ride on top of that cloud infrastructure behave consistently across those different environments.

Enterprises are really struggling to make the all-in-one cloud model work. They don't use only a single operating system or database or management tool, so we believe they will need to create a hybrid cloud environment. It's not so much that they want to; it's that they need to, given the reality of their existing IT environments.

What is fundamentally different about our approach is that we're building a composable product portfolio. If a customer wants only, say, an application platform, or only an infrastructure-as-a-service platform, or wants to bring existing hardware, be it HP or non-HP, into a cloud environment, we need to have something that is composable and flexible.

That led us to probably the most important design decision we made, which was to build this product portfolio with a deep spine of open-source technologies. So we have OpenStack at the core of our IaaS layer and Cloud Foundry at the core of our development platform, but it’s not limited to that. We also support a wide range of open source tools, different types of application technologies, different databases and multiple languages. Really our core DNA is building around open source, which means less vendor lock-in and more flexibility for enterprise customers.

We just started to ship the first production-ready GA version of the Helion OpenStack distribution and Helion development platform which we’ve been working on for the past year and a half, and there are a number of ways customers can pick it up. There is a community version users can download and play with for free, they can buy it as stand-alone software to run on their own gear, they can buy it pre-integrated with HP solutions, or they may consume everything as a service. The latter doesn’t have to be a public cloud. It might be a hosted environment inside an enterprise so the customer can consume everything internally to meet regulatory requirements or policies.

So that’s how it will manifest. Customers will have a choice of different cloud models.

So a customer could have you build a cloud within their organization and run it for them as a service?

Yes. So customers might say, “I want all the benefits of a cloud, the speed, the economics, the self-service, but I want it in my data center and I want you to fully manage it, either remotely or in my environment.” That’s particularly appealing to large enterprises and large government agencies. That model is coming up again and again, and there are lots of different terms for it. You can call it managed private clouds, or a cloud-enabled hosting environment, but it’s essentially what you said.

The capital expense is yours and the customer just pays a service fee?

There are all sorts of ways customers want the mathematics to work. Sometimes they’ll want to be an internal cloud broker, providing services to internal customers. We have a big media customer doing this. They have an internal portal that says, “Hey, do you want compute or storage or networking?” And the internal end user has no idea what is actually providing that. Behind the scenes, based on the requirements and the price point and the constraints the end user describes, they can deliver the services either from their Helion OpenStack private cloud or, in some cases, they go out to a public cloud.

So, for example, if a customer wants extreme commodity storage pricing and they have very few constraints on how that data is stored or where, this internal broker might go back with AWS, but it’s presented to the internal customer just as a storage resource. That’s a really common pattern right now. We call it ‘internal service providers’ but it’s kind of cloud brokering.
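
To make the brokering pattern concrete, here is a minimal, hypothetical sketch of that routing decision in Python. The class, backend names, and price threshold are invented for illustration; they are not HP's implementation.

```python
# Hypothetical "internal service provider" routing: send a storage request to the
# private OpenStack cloud or to a public cloud based on the constraints the internal
# customer supplies. All names and thresholds are invented.
from dataclasses import dataclass


@dataclass
class StorageRequest:
    size_gb: int
    data_must_stay_onsite: bool   # regulatory or policy constraint
    max_price_per_gb: float       # what the internal customer is willing to pay


def choose_backend(req: StorageRequest) -> str:
    """Pick which backend should serve this request."""
    if req.data_must_stay_onsite:
        return "helion-openstack-private"   # policy forces the private cloud
    if req.max_price_per_gb < 0.03:         # extreme commodity pricing
        return "public-object-storage"      # e.g. an AWS bucket behind the portal
    return "helion-openstack-private"


print(choose_backend(StorageRequest(size_gb=500,
                                    data_must_stay_onsite=False,
                                    max_price_per_gb=0.02)))
# -> public-object-storage
```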

Can you describe the difference between Helion OpenStack and the Helion Development Platform?

Helion OpenStack is a distribution of OpenStack built around the current tree of Juno. We don’t go in and swap out core components for HP proprietary stuff. We take the core of OpenStack and then do a whole bunch of work to make it easier to install, patch and configure, because that’s where a lot of the pain points are right now in OpenStack. We also do a lot of security work on it and then run it at very large scale in the HP public cloud to test for reliability. We learn a lot from running OpenStack in a large public cloud environment.

Above that we have the Helion Development Platform, which is a PaaS layer, but think of it as using Cloud Foundry as the host, or the run time, for applications. So it supports all these different languages and you can publish your Java app or node.js app or Python app or Ruby app into that full application lifecycle environment.

Then alongside of that we have a set of application services. So, for example, if someone wants to use database-as-a-service, we have an easy-to-use DB service so a developer can quickly add a database to their app. Behind the scenes we do a binding between that database-as-a-service at the PaaS layer, all the way down into OpenStack’s database-as-a-service offering called Trove. That way we can then offer that database-as-a-service at the development platform layer in a way that’s automatically highly available, and automatically has disaster recovery built in because we’re leveraging the Trove system underneath and providing that resilience to the database behind the scenes.
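
As a rough sketch of what such a binding can look like underneath, the snippet below asks Trove to provision a database instance over its REST API. The endpoint, token, flavor, and credentials are placeholders, and field names can vary by OpenStack release, so treat it as an approximation of the mechanism rather than HP's actual plumbing.

```python
# Approximate shape of a create-instance call against OpenStack Trove (the
# database-as-a-service project mentioned above). Endpoint, tenant, token and
# credentials are placeholders; check the Trove API reference for exact fields.
import requests

TROVE_ENDPOINT = "https://trove.example.com:8779/v1.0/TENANT_ID"  # placeholder
TOKEN = "KEYSTONE_TOKEN"                                          # placeholder

payload = {
    "instance": {
        "name": "orders-db",
        "flavorRef": "m1.small",          # flavor name/ID varies per cloud
        "volume": {"size": 5},            # GB of backing block storage
        "databases": [{"name": "orders"}],
        "users": [{"name": "appuser",
                   "password": "CHANGE_ME",
                   "databases": [{"name": "orders"}]}],
    }
}

resp = requests.post(f"{TROVE_ENDPOINT}/instances",
                     json=payload,
                     headers={"X-Auth-Token": TOKEN})
resp.raise_for_status()
print(resp.json()["instance"]["id"])  # the instance the platform would bind to the app
```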

We’ll do a lot more things like that where we basically illuminate the capabilities inside OpenStack at a higher level for developers to take advantage of. For example, there’s this concept called affinity scheduling inside OpenStack where you can say, assign my VM to a high memory machine or assign these VMs to that data center because that is the only one that’s HIPAA compliant. As that grows in OpenStack, we want to light up that type of capability higher in the platform so it becomes really easy for the developer.
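
The mechanism underneath that is the Nova scheduler's server groups, which carry affinity or anti-affinity policies. The sketch below drives it through the era-appropriate nova CLI; the group name, image, and flavor are placeholders, and exact client syntax varies by version.

```python
# Sketch of affinity scheduling via Nova server groups, driven through the nova CLI.
# Use the policy "anti-affinity" instead to spread members across hosts.
import subprocess


def run(cmd):
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout


# 1. Create a server group whose policy keeps members on the same host.
print(run(["nova", "server-group-create", "db-tier", "affinity"]))

# 2. Boot a VM with a scheduler hint so Nova places it according to the group policy.
#    (Parsing the group UUID out of step 1 is omitted for brevity.)
group_id = "GROUP_UUID"  # placeholder
print(run(["nova", "boot",
           "--image", "ubuntu-14.04",
           "--flavor", "m1.medium",
           "--hint", f"group={group_id}",
           "db-node-1"]))
```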

We also use Docker behind the scenes in the Helion Development Platform. Every app you build on the platform instantiates as a Docker container, so you can take those containers and assign them wherever you want. We think this Docker + OpenStack combination is going to be very powerful.
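
As a minimal illustration of that packaging model, the sketch below uses the Docker SDK for Python to build an app image and run the resulting container locally; the paths, tags and ports are placeholders.

```python
# Build an application image once, then run (or ship) the resulting container
# wherever it is needed. Requires the "docker" Python package and a local daemon.
import docker

client = docker.from_env()

# Build an image from an app directory that contains a Dockerfile.
image, _build_logs = client.images.build(path="./my-node-app", tag="my-node-app:1.0")

# Run it locally; the same image can be pushed to a registry and scheduled onto any
# Docker-capable host, including VMs provisioned through OpenStack.
container = client.containers.run("my-node-app:1.0",
                                  detach=True,
                                  ports={"8080/tcp": 8080})
print(container.short_id)
```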

So, back to your question, they are two different architectural layers. One is targeted at developers, and one is targeted at IT ops. They can be used independently but we’re doing a lot of work to make them better together.

When it comes to use cases for cloud, VMware is positioning its vCloud Air as a natural landing spot for ESX workloads, and Microsoft Azure is a natural spot for Hyper-V and System Center, so where do you see HP being the natural answer?

Because of my Microsoft background I can ask a company what versions of Windows Server and System Center they're using and I'll know right away if they're a Microsoft loyalist or not, and for those customers the Azure story is compelling. And AWS is definitely the default if you're a startup looking for the fastest onramp to compute and storage resources that can scale wide. Where we win is with enterprises that have worked all the way through the virtualization steps over the past three to four years, companies that have more than 50% of their environment virtualized. Now they're getting a lot of pressure to go faster.

So what they’re trying to do is take a first step into the cloud, but they are typically encumbered by a tremendous amount of existing IT or security requirements or other business or industry constraints. We have a customer, for example, who just did a few acquisitions, some of which have used public clouds. Their business policy doesn’t allow the use of public clouds so now they have to repatriate those resources back inside their firewall. So we deal with a lot of people who are building private clouds first.

Private cloud on their premise?

Yes. The other big sweet spot for us is service providers and telcos, and there are a few reasons for that. One, telcos in particular are very open-source oriented. And two, many service providers and telcos are massively threatened by the public cloud vendors. So, if you are a telco or service provider in, let's say, Europe or Asia, Amazon and Google can be really threatening, not just because of their cloud businesses but because of the whole value chain, all the way down to the device. So they want to ‘OEM’ our public cloud technology because they need to build a competitive offering to an AWS or Google in their markets.

In the enterprise, how critical are network advances such as software defined networking and network function virtualization in supporting this whole hybrid vision?

Frankly, the network is either the enabler or the bottleneck in most cloud deployments, because so much of a horizontally scalable distributed system is deeply tethered to network capabilities. So as you scale from 100 to 1,000 to 10,000 to 100,000 nodes in a system, the network architecture becomes increasingly critical. In our distro of Helion OpenStack we make sure our networking functionality is great upstream in Neutron, which is the network component inside OpenStack, but we also need to be pluggable with other SDN controllers, with VMware NSX, with our own HP SDN, and so on. And down the road we'll have to be pluggable with others that emerge, because there won't be one SDN to rule them all. I'm sure some vendors would love to have that control point, but it's just not realistic.
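
For context on what "pluggable" means at the Neutron layer, the sketch below touches the ML2 configuration option through which a deployment typically selects its back-end mechanism driver. The file path and driver names are common defaults rather than HP-specific settings.

```python
# Read and update the ML2 mechanism_drivers option that selects Neutron's back end
# (Open vSwitch, a vendor SDN controller, NSX, and so on).
import configparser

ML2_CONF = "/etc/neutron/plugins/ml2/ml2_conf.ini"  # typical location

cfg = configparser.ConfigParser()
cfg.read(ML2_CONF)
if not cfg.has_section("ml2"):
    cfg.add_section("ml2")

print("current drivers:", cfg.get("ml2", "mechanism_drivers", fallback="<unset>"))

# Swap in a different mechanism driver without touching the rest of the control plane.
cfg.set("ml2", "mechanism_drivers", "openvswitch,l2population")
with open(ML2_CONF, "w") as f:
    cfg.write(f)
```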

This is one of the challenges of building commercial open-source products: you have to add as much value as possible without ripping out the flexibility that customers were originally interested in with open source, and without tainting it. It's very easy to go too far in either direction. One way, it becomes a Swiss Army knife, good at a whole bunch of things but not really good at any one thing. The other way, it becomes extremely proprietary and you lose the reason you built on open source in the first place.

One way we’re addressing the specific networking needs for one of our customer segments, communication service providers, is through a partnership with Wind River to integrate their carrier grade technologies into Helion OpenStack. This will provide communications service providers with an open source based cloud platform to meet their demanding reliability requirements and accelerate their transition to NFV deployments. All within our open source model and keeping OpenStack API compatibility.

Are all Helion private clouds based on OpenStack or do you sell some non-OpenStack private clouds as well?

Historically we had a private cloud infrastructure-as-a-service offering called Matrix that was not open source. This was actually before I joined. There are still customers that use that, but over time our plan is to evolve that product with our Helion OpenStack distribution. We will do it in a thoughtful manner so we don’t force customers to rip and replace. But going forward we’ve made a company-wide commitment to OpenStack.

It’s a fundamental bet. We actually got asked once at a very senior meeting, “What’s Plan B if OpenStack doesn’t work out?” I said there is no Plan B. If you have a Plan B, having lived through this at Microsoft, you end up hedging, doing things to secure the option. So you have to go all in if you really want a platform to take off. So it’s a big, fundamental decision for us and a fundamental focus that we have to make OpenStack be what we need it to be for our enterprise customers. There’s not a lot of “let’s sit around and wait for it to evolve.”

There are certainly still some big challenges with OpenStack, but we have many customers who are happily running hundreds of nodes and many thousands of VMs on OpenStack for a private cloud and getting great benefit today.

In terms of hypervisor support, do you guys focus on one hypervisor or support a bunch?

At every layer we need to give customers choice. So we support KVM, which is the default people use in most cases, but with this release of our Helion OpenStack we support ESX and very shortly we’ll support Hyper-V.

But at each layer we support choice. At the hardware layer, for example, we support our HP gear but have a certification test for third parties on non-HP gear, and a set of tests and benchmarks we give to third-party OEMs to validate against. We know we’re not going to sell an HP server with every software sale – that’s not reality.

Then even further up the stack we have multiple programming languages and frameworks people can choose from, from Python or Ruby or Java or .NET. That polyglot environment is important for us.

So we’re not only giving customers a choice of where to install and run their cloud, we also give them a lot of choice when it comes to the technology they can use because, at the end of the day, the VMware story is very vertical, the Red Hat story is very vertical, the Microsoft story, even though they talk a lot about open source, is really very vertical. Choice and a platform truly built on open source – that’s a differentiation for us.

If you’re pushing a high-end, enterprise-level story, why on the Helion website are you shouting about price so much? That kind of screams commodity.

As of 2014 less than 10% of enterprise IT is using cloud computing, so the growth opportunity is huge. And when you are trying to fight an early market battle for share, particularly for OpenStack oriented customers, you want to grab as much share as fast as possible.

One of the biggest advantages of a company like HP is we have all sorts of ways we can monetize. We don’t need to sell software at huge margins. We don’t need to sell a server for everything we do. We don’t need to sell services for everything. We have all kinds of ways we can make money through the broad HP. So that gives us a bunch of freedom, actually more freedom than I had at Microsoft because we can do things on every dimension to compete and aggressively grab market share.

And one tool we can use is price. So we can go undercut the other guy because our P&L isn’t solely based on software markets. We certainly compete with other OpenStack distributions like Red Hat. So one of the reasons we’re coming in at the price point we are is because we want to make it zero friction for our customer when they do that comparison of OpenStack distro A versus OpenStack distro B, at every level of comparison.

But, that said, almost everything we do is through a larger enterprise relationship. Typically when an enterprise is buying from HP they’re not making a singular decision for one piece of software or one server order or one set of services. So we talk about the big picture, what our cloud platform can do, how we indemnify our distribution of OpenStack, product capabilities, pricing, the whole thing.

This is really hard when you have a business model that is pegged to one thing like software. You end up between a rock and a hard place: you can't easily discount below your margin line, because it's very difficult to make that up. Microsoft has a little more flexibility because of the breadth of its software and offerings. For Red Hat and VMware it's a little different because they are bound to their business model, so they have some very hard floors and ceilings in terms of what flexibility they have.

You recently acquired Eucalyptus which doesn’t have big OpenStack roots. They’re mostly about AWS integration. How do you see that fitting in?

Eucalyptus was really two things for us. It was a good collection of people who know how to build cloud software, and it was the AWS interoperability piece. I keep talking about choices, and we realize the design pattern of AWS is hugely relevant. So we needed the ability to tell customers, if you have or are interested in that design pattern, we have a way to support that.

So where we typically see the Eucalyptus demand is where a customer wants to have the ability to move an app out of AWS back to a private or managed cloud environment, or where someone says, I don’t know what’s going to happen yet in terms of going to the public cloud so I’m going to first build my private cloud apps with Eucalyptus and the AWS design pattern (basically meaning using the EC2 APIs, the S3 APIs, etc.), and building it in a way that gives me the flexibility to locate the work where I want.
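
As a sketch of what that design-pattern portability looks like in code, the snippet below points standard boto3 clients at EC2- and S3-compatible endpoints, such as a Eucalyptus private cloud. The endpoint URLs and credentials are placeholders.

```python
# The same boto3 code can target AWS itself or an AWS-compatible private endpoint
# simply by switching endpoint_url and credentials.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.private.example.com",   # S3-compatible endpoint
    aws_access_key_id="PRIVATE_CLOUD_KEY",
    aws_secret_access_key="PRIVATE_CLOUD_SECRET",
)

ec2 = boto3.client(
    "ec2",
    endpoint_url="https://compute.private.example.com",   # EC2-compatible endpoint
    aws_access_key_id="PRIVATE_CLOUD_KEY",
    aws_secret_access_key="PRIVATE_CLOUD_SECRET",
)

print([b["Name"] for b in s3.list_buckets()["Buckets"]])
print([r["Instances"] for r in ec2.describe_instances()["Reservations"]])
```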

What should we look for this coming year?

You’ll see us continue to build out our Helion distro of OpenStack and our Helion development platform, so you’ll see new services, new capabilities, that kind of thing. You’ll see us do a lot in the telco/service provider/NFV space.

And later in the year you’ll hear us talk a lot about a new model for enterprises that want to consume managed cloud services but don’t want to buy anything physical, don’t want to own anything anymore, that just want to consume, but in a way that matches their business realities today. We’ll be doing a lot in that space. I’m a believer that the cloud industry we have today is going to look very different in the future as the enterprise really starts adopting cloud technologies – and then all cloud vendors will shape their strategies to fit what enterprises want. So we’re trying to skate to where the puck will be and start to invent some of those new models.

You mentioned that analysts say only 10% of enterprise needs are supported by the cloud today. What’s the timeframe for change?

That’s the multi-trillion dollar question, isn’t it? But I see two enterprise patterns happening right now and this may inform the answer. One is the linear step. I’m going to move from virtualization to private cloud infrastructure-as-a-service, then I’ll try out some of this PaaS stuff to see how that really makes sense. Then I’ll see if I can run that across multiple data centers and then maybe see if a public cloud thing makes sense. So it’s kind of a linear mode.

The other pattern I hear, and this is the riskier one, is where the CIO says any new app inside my enterprise will be built to platform-as-a-service and can have zero knowledge of an operating system underneath it. What they’re trying to do is say, let’s start building in the new cloud-native model so we don’t have to worry about migrations and lift and shift and all of that.

But then there’s another question, and that is, which platform-as-a-service? At some point you’re binding to something, you’re making some commitment to some API somewhere. It may not be at the operating system level anymore. It may be higher up the stack in the middleware.

Then frequently we see customers say, we won’t move our existing resources to a cloud model. We’ll build the next project or the next deployment in a true cloud model. We’ll build that as a stand-alone system and then try to bridge back, usually through management tools, to the old. That is very common as well.


 


How to set up 802.1X client settings in Windows

802.1X provides security for wired and Wi-Fi networks

Understanding all the 802.1X client settings in Windows can certainly help during deployment and support of an 802.1X network. This is especially true when manual configuration of the settings is required, such as in a domain environment or when fine-tuning wireless roaming for latency-sensitive clients and applications, like VoIP and video.

An understanding of the client settings can certainly be beneficial for simple environments as well, where no manual configuration is required before users can log in. You still may want to enable additive security measures and fine-tune other settings.

Though the exact network and 802.1X settings and interfaces vary across the different versions of Windows, most are quite similar between Windows Vista and Windows 8.1. In this article, we show and discuss those in Windows 7.

Protected EAP (PEAP) Properties

Let’s start with the basic settings for Protected EAP (PEAP), the most popular 802.1X authentication method.

On a Network Connection’s Properties dialog window you can access the basic PEAP settings by clicking the Settings button.

Next, you move through the settings on this PEAP Properties dialog window.

Validate server certificate: When enabled, Windows will try to ensure the authentication server that the client uses is legitimate before passing on its login credentials. This server certificate validation tries to prevent man-in-the-middle attacks, where someone sets up a fake network and authentication server so they can capture your login credentials.

By default, server certificate validation is turned on and we certainly recommend keeping it enabled, but temporarily disabling it can help troubleshoot client connectivity issues.

Connect to these servers: When server certificate validation is used, here you can optionally define the server name that should match the one identified on the server’s certificate. If matching, the authentication process proceeds, otherwise it doesn’t.

Typically, Windows will automatically populate this field based upon the server certificate used and trusted the first time a user connects.

Trusted Root Certification Authorities: This is the list of certification authority (CA) certificates installed on the machine. You select which CA the server’s certificate was issued by, and authentication proceeds if it matches.

Typically, Windows will also automatically choose the CA used by the server certificate the first time a user connects.

Do not prompt user to authorize new servers or trusted certification authorities: This optional feature will automatically deny authentication to servers that don’t match the defined server name and chosen CA certificate. When this is disabled, users are instead asked whether they’d like to trust the new server certificate, a prompt they likely won’t understand.

We recommend this additive security as well. It can help keep users from unknowingly connecting to a fake network and authentication server and falling victim to a man-in-the-middle attack. Unlike the two previous settings, you must manually enable this one.

The next setting is where you choose the tunneled authentication method used by PEAP. Since Secured password (EAP-MSCHAP v2) is the most popular, we’ll go through it. Clicking the Configure button shows one setting for EAP-MSCHAP v2: Automatically use my Windows logon name and password (and domain if any).

This is the dialog box you see after clicking the Configure button for the EAP-MSCHAP v2 authentication method.

This should only be enabled if your Windows login credentials match those in the authentication server, for instance if the server is connected to Active Directory. After connecting to an 802.1X network for the first time, Windows should automatically set this appropriately.

Back on the PEAP Properties dialog window, under the authentication method, are four more settings:

Enable Fast Reconnect: Fast Reconnect, also referred to as EAP Session Resumption, caches the TLS session from the initial connection and uses it to simplify and shorten the TLS handshake process for re-authentication attempts. Because it spares clients roaming between access points from having to perform a full authentication, it reduces overhead on the network and improves roaming for latency-sensitive applications.

Fast Reconnect is usually enabled by default when a client connects to an 802.1X network that supports it, but if you push network settings to clients you may want to ensure Fast Reconnect is enabled.

Enforce Network Access Protection: When enabled, this forces the client to comply with the Network Access Protection (NAP) policies of a NAP server set up on the network. For instance, NAP can restrict connections of clients that lack antivirus software, a firewall, or the latest updates, or that have other health-related vulnerabilities.

Disconnect if server does not present cryptobinding TLV: When manually enabled, this requires the server use cryptobinding Type-Length-Value (TLV), otherwise the client won’t proceed with authentication. For RADIUS servers that support cryptobinding TLV, it increases the security of the TLS tunnel in PEAP by combining the inner method and the outer method authentications so that attackers cannot perform man-in-the-middle attacks.

Enable Identity Privacy: When using tunneled EAP authentication (like PEAP), the username (identity) of the client is sent twice to the authentication server. First it’s sent unencrypted, called the outer identity, and then inside an encrypted tunnel, called the inner identity. In most cases, you don’t have to use the real username on the outer identity, which prevents eavesdroppers from discovering it. However, depending upon your authentication server you may have to include the correct domain or realm.

This setting is disabled by default and I recommend manually enabling it. After enabling identity privacy, you can type whatever you want as the username, such as “anonymous”. Alternatively, if the domain or realm is required: “anonymous@domain.com”.
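
If you want to see where these PEAP choices actually land on disk, one option is to export the wireless profile with netsh and read the resulting XML, where the EAP configuration appears under the EAPConfig element. The sketch below assumes a profile name and output folder; adjust both, and note that exporting may require an elevated prompt.

```python
# Export a Wi-Fi profile to XML with netsh and list the exported files.
import subprocess
from pathlib import Path

PROFILE = "CorpWiFi"   # your SSID / profile name
FOLDER = r"C:\temp"    # folder where netsh writes the XML

subprocess.run(["netsh", "wlan", "export", "profile",
                f"name={PROFILE}", f"folder={FOLDER}"],
               check=True)

for xml_file in Path(FOLDER).glob("*.xml"):
    print(xml_file.name)   # open the file to inspect the <EAPConfig> section
```
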
Advanced 802.1X Settings

On a Network Connection’s Properties dialog window you can access advanced settings by clicking the Advanced Settings button.

The first tab is the advanced 802.1X settings.
On the 802.1X Settings tab, you can specify the authentication mode: User, Computer, User or Computer, or Guest authentication.

User authentication will use only the credentials provided by the user, while Computer authentication uses only the computer’s credentials. Guest authentication allows connections to the network that are regulated by the restrictions and permissions set for the Guest user account.

Using the combined User or Computer authentication option allows the computer to log into the network before a user logs into Windows and then also enables the user to login with their own credentials afterward. This enables, for instance, the ability to use 802.1X within a domain environment, as the computer can connect to the network and domain controller before a user actually logs into Windows.

When User only authentication is used, you can click the Save Credentials button to input the username and password. Additionally, you can remove saved credentials by marking the Delete credentials for all users checkbox.

The second section of the 802.1X Settings tab is where you can enable and configure Single Sign On functionality. If the system and network are set up properly, using this feature eliminates the need to provide separate login credentials for Windows and 802.1X. Instead of having to input a username and password during the 802.1X authentication, it uses the Windows account credentials. Single sign-on (SSO) features save time for both users and administrators and help to create an overall more secure network.
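
A quick way to confirm what the client actually negotiated while testing these authentication-mode and single sign-on settings is to query the adapter state with netsh, as in the sketch below.

```python
# Print the connection state, SSID, authentication and cipher reported by Windows.
import subprocess

out = subprocess.run(["netsh", "wlan", "show", "interfaces"],
                     capture_output=True, text=True, check=True).stdout

for line in out.splitlines():
    if any(key in line for key in ("State", "SSID", "Authentication", "Cipher")):
        print(line.strip())
```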

Advanced 802.11 Settings

On the Advanced Settings dialog box you’ll see an 802.11 settings tab if WPA2 security is used. First are the Fast Roaming settings:

The second tab on the Advanced Settings window holds the advanced 802.11 settings.
Enable Pairwise Master Key (PMK) Caching: This allows clients to perform a partial authentication process when roaming back to the access point the client had originally performed the full authentication on. This is typically enabled by default in Windows, with a default expiration time of 720 minutes (12 hours).

This network uses pre-authentication: When both the client and the access points support pre-authentication, you can manually enable this setting so the client doesn’t have to perform a full 802.1X authentication process when connecting or roaming to new access points on the network. This can help make the roaming process even more seamless, which is useful for sensitive clients and traffic, such as voice and video. Once a client authenticates via one access point, the authentication details are conveyed to the other access points. Basically, it’s like doing PMK caching with all access points on the network after connecting to just one.

Enable Federal Information Processing Standard (FIPS) compliance for this network: When manually enabled, the AES encryption will be performed in a FIPS 140-2 certified mode, which is a government computer security standard. It would make Windows 7 perform the AES encryption in software, rather than relying on the wireless network adapter.



Hey Samsung: Not everybody has to be a platform

It’s easy to see why everybody wants to be a platform these days. Just look at Apple: By owning both the hardware and the operating system, it gets total control over what developers build on its platform — and a sizable cut of the revenues besides. In return, developers get an unmatched distribution channel directly to customers’ devices. As Apple extends to new devices, those developers get to come along.

So it’s no wonder that Samsung, eternally defining itself by its struggles with Apple, wants to be a platform, too, especially in the face of shrinking profits. On paper, it seems so simple: Samsung has the hardware business. It’s making some wearables, investing in a connected home business with the SmartThings acquisition, and getting into virtual reality.

Open some APIs, give out some SDKs, talk about “open” and host a big-time developer conference in San Francisco (as in, the Samsung Developer Conference I write this from) to make sure everybody knows how committed you are.

But what Samsung is lacking, what major platform providers have in spades, is something harder to pin down, and much harder to imitate. Apple, Salesforce, even Microsoft lately, have demonstrated that most vague, but most important notion. They have vision — a clear and present mission that drives them forward, even when that path isn’t immediately obvious.

But Samsung? Samsung has really good phones and some solid tablets and a partnership with Oculus and SmartThings and now Project Beyond, a super nifty 360-degree streaming 3D high definition camera. But in the entire two-hour keynote session this morning, attendees were treated to a rapid-fire string of previously announced non-news like the Simband open health wearable platform (now open for developer sign-ups), a demo of what’s possible with SmartThings and a reaffirmation that the company will keep investing in Samsung Knox, its enterprise workspace feature.

Other than the virtual reality stuff, and the Project Beyond camera, which are actually, really, very cool, it’s mostly a lot of the same old. The only “new” thing coming to Samsung devices is Samsung Flow, a me-too take on Apple’s cross-device Continuity features. Other than that, the company was just trying to show developers that products exist and can be built upon without offering a tremendously compelling case for why. It’s not really leadership material.

When Apple is selling watches, Google is selling Nest thermostats, and Microsoft is revamping Windows for the multi-device future, Samsung’s follow-along mentality of “just add developers” just doesn’t seem like enough, no matter how many sensors it adds to Simband.

(The company’s technical keynote takes place Thursday; maybe there’ll be something more impressive that will change my mind. But I doubt it.)

The point here is that Samsung is a hardware company, in so many ways. It’s succeeded in the first place by making devices that people actually want to use. And part of how it got there was by being part of somebody else’s ecosystem. And yeah, it must chafe those at Samsung corporate command to have Google to thank for the success of the Galaxy S line of phones. But maybe, just maybe, throwing your support behind an operating system that nobody asked for, wants, needs or supports (Tizen) wasn’t the right answer, no matter how technologically proficient it is.

And in the same way people ask whether Microsoft’s hardware business is good for Microsoft’s vision as a service provider, they have to also ask whether this whole insistence on being a software provider is good for Samsung’s business. Nobody seems excessively jazzed about developing for the Samsung-backed Tizen ecosystem in a world where Android and iOS are already pretty well standardized.

“Ecosystem” is just a fancy word for building the stuff that users, not corporations, want. Rather than controlling everything, maybe a renewed focus on being the best part of the Android ecosystem — and on making what customers actually want — would do Samsung good.



Cisco patches serious vulnerabilities in small business RV Series routers

The flaws allow attackers to execute commands, overwrite files and launch CSRF attacks

Cisco Systems released patches for its small business RV Series routers and firewalls to address vulnerabilities that could allow attackers to execute arbitrary commands and overwrite files on the vulnerable devices.

The affected products are Cisco RV120W Wireless-N VPN Firewall, Cisco RV180 VPN Router, Cisco RV180W Wireless-N Multifunction VPN Router, and Cisco RV220W Wireless Network Security Firewall. However, firmware updates have been released only for the first three models, while the fixes for Cisco RV220W are expected later this month.


One of the patched flaws allows an attacker to execute arbitrary commands as root — the highest privileged account — through the network diagnostics page in a device’s Web-based administration interface. The flaw stems from improper input validation in a form field that’s supposed to only allow the PING command. Its exploitation requires an authenticated session to the router interface.
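
To illustrate the flaw class (not Cisco's actual firmware code), here is a minimal sketch of a diagnostics handler: the first version builds a shell command from the form field and is injectable, while the second validates the field and avoids the shell entirely.

```python
# Command-injection illustration for a "ping" diagnostics form field.
import ipaddress
import subprocess


def ping_unsafe(host_field: str) -> str:
    # A value like "8.8.8.8; cat /etc/passwd" runs a second command here.
    return subprocess.run("ping -c 1 " + host_field, shell=True,
                          capture_output=True, text=True).stdout


def ping_safe(host_field: str) -> str:
    ipaddress.ip_address(host_field)  # raises ValueError for anything but an IP address
    return subprocess.run(["ping", "-c", "1", host_field],  # fixed argv, no shell
                          capture_output=True, text=True).stdout
```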

A second vulnerability allows attackers to execute cross-site request forgery (CSRF) attacks against users who are already authenticated on the devices. Attackers can piggyback on their authenticated browser sessions to perform unauthorized actions if they can trick those users to click on specially crafted links.

This vulnerability also provides a way to remotely exploit the first flaw. Researchers from Dutch security firm Securify, who found both issues, published a proof-of-concept URL that leverages the CSRF flaw to inject a command through the first vulnerability that adds a rogue administrator account on the targeted device.

A third security flaw that was patched by Cisco allows an unauthenticated attacker to upload files to arbitrary locations on a vulnerable device using root privileges. Existing files will be overwritten, the Securify researchers said.

Cisco released firmware versions 1.0.4.14 for the RV180 and RV180W models and firmware version 1.0.5.9 for the RV120W.

Users can limit the exposure of their devices to these flaws by not allowing remote access from the Internet to their administrative interfaces. If remote management is required, the Web Access configuration screen on the devices can be used to restrict access only to specific IP addresses, Cisco said in its advisory.
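
As a generic illustration of that mitigation, the sketch below checks whether a client address falls inside an approved management range; the range is a placeholder for whatever addresses you actually trust.

```python
# Allow management access only from an approved address range.
import ipaddress

ALLOWED_MGMT_NET = ipaddress.ip_network("203.0.113.0/28")  # e.g. office or VPN range


def management_access_allowed(client_ip: str) -> bool:
    return ipaddress.ip_address(client_ip) in ALLOWED_MGMT_NET


print(management_access_allowed("203.0.113.5"))    # True
print(management_access_allowed("198.51.100.20"))  # False
```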


 


IDC: Public cloud to be $127B industry by 2018

Public IaaS market to grow 6x faster than IT market

Research firm IDC’s latest estimate pegs the public IT cloud market at $56.6 billion this year, and it’s expected to grow to a $127 billion market within four years.

The public cloud computing market is still in the early stages of adoption, with rapid growth forecast in the coming years. IDC predicts the cloud market will grow at a compound annual rate of 22.8%, six times faster than growth in the overall IT market. By 2018, IDC expects cloud spending to account for half of the growth in software, server and storage spending.
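
The forecast figures are internally consistent; a quick check of the compound growth from the 2014 base:

```python
# $56.6B growing at a 22.8% compound annual rate over the four years from 2014 to 2018.
start_billion, cagr, years = 56.6, 0.228, 4
projected = start_billion * (1 + cagr) ** years
print(round(projected, 1))  # ~128.7, in line with IDC's ~$127 billion figure
```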

The SaaS market is the leader in the cloud, making up 70% of current cloud spending. The IaaS market is the second-largest, while the platform as a service (PaaS) market is the fastest growing, but smallest major segment of the market, IDC says.

One factor that IDC expects will help encourage cloud computing adoption is the rise of industry-specific cloud offerings. Having cloud computing services tailor-made for specific vertical industries – the idea behind a “community cloud” – will make cloud services more appealing to specific kinds of businesses. “Many of these new solutions will be in industry-focused platforms with their own innovation communities, which will reshape not only how companies operate their IT, but also how they compete in their own industry,” IDC’s Chief Analyst Frank Gens says in a new report out today.

The market has already seen some industry-specific customization of cloud services. Cloud providers like Amazon, Microsoft and Verizon have separate cloud IaaS offerings tailored for government workloads, for example. In the SaaS industry, customization to specific vertical industries is becoming more common as well from companies like Salesforce.com.


 



BYOD forces users’ personal information on help desk


Help desk staffers can be caught in the middle when BYOD users get verrrry personal with their devices.

As the recent scandal over leaked celebrity photographs reminded us all, people use their electronic devices for very personal pursuits in the era of smartphone ubiquity. Depending on the age and inclination of its owner, a modern-day digital device might contain not just nude selfies like those that were shared online, but images from dating sites like Tinder and Grindr, creepshots, or other salacious or even illegal material downloaded from the backwaters of “the dark Web” via anonymizers like Tor.

As blogger Kashmir Hill summed up as the selfie scandal was unfolding, “Phones have become sex toys.”

If that’s true, then those toys are making their way into the workplace in record numbers, thanks to the ever-increasing number of organizations adopting bring-your-own-device (BYOD) policies.

In a perfect world, none of this should concern help desk employees — with a well-executed mobile management program in place that incorporates containerization, a technician ought to be able to assist employees with corporate apps and data without encountering so much as a pixel of not-safe-for-work (NSFW) material.

But the world isn’t always perfect, as IT support staffers know perhaps more than most. Which means they can find themselves looking not just at enterprise applications but at private images and texts they’d really rather not see. Or politely pointing out to an employee who’s synced all her devices to the cloud that pictures from her honeymoon are currently being displayed on the conference room’s smartboard. Or repeatedly removing viruses picked up by the same users visiting the same porn sites.

The scope of the problem

In a survey published last year by software vendor ThreatTrack Security, 40% of tech support employees said they’d been called in to remove malware from the computer or other device of a senior executive, specifically malware that came from infected porn sites. Thirty-three percent said they had to remove malware caused by a malicious app the executive installed. Computerworld checked with several security experts, none of whom was particularly surprised by that statistic.

The ThreatTrack survey didn’t tease out how much of this was on BYODs. But in a February 2014 survey by consulting firm ITIC and security training company KnowBe4, 34% of survey participants said they either “have no way of knowing” or “do not require” end users to inform them when there is a security issue with employee-owned hardware. Some 50% of organizations surveyed acknowledged that their corporate and employee-owned BYOD and mobile devices could have been hacked without their knowledge in the last 12 months. “BYOD has become a big potential black hole for a lot of companies,” says Laura DiDio, ITIC principal analyst.

One big concern: As McAfee Labs warns in its 2014 Threat Predictions report, “Attacks on mobile devices will also target enterprise infrastructure. These attacks will be enabled by the now ubiquitous bring-your-own-device phenomenon coupled with the relative immaturity of mobile security technology. Users who unwittingly download malware will in turn introduce malware inside the corporate perimeter that is designed to exfiltrate confidential data.”

Today’s malware from porn sites is usually not the kind of spyware that’s dangerous to enterprises, says Carlos Castillo, mobile and malware researcher at McAfee Labs — but that could change. “Perhaps in the future, because of the great adoption of BYOD and people using their devices on corporate networks, malware authors could . . . try to target corporate information,” he says.

In fact, a proof-of-concept application was recently leaked that is designed to target corporate data from secure email clients, Castillo says. The software used an exploit to obtain root privileges on the device to steal emails from a popular corporate email client, alongside other spyware exploits like stealing SMS messages. “While we still have not seen malware from porn sites that is dangerous to enterprises,” Castillo says, “this leaked application could motivate malware authors to use the same techniques using malicious applications potentially being distributed via these [porn] sites.”

Beyond security, there could be legal liabilities in play as well, some analysts caution. For example, a corporation might be liable if an IT staffer saw evidence of child porn on a phone.

To be sure, porn sites cause only a small fraction of the problems that users introduce into the enterprise. According to Chester Wisniewski, senior security advisor at Sophos, some 82% of infected sites are not suspicious places like porn sites, but rather sites that appear benign. And for smartphones, the biggest malware danger is from unsanctioned apps, not NSFW sites, he says.

Roy Atkinson, a senior analyst at HDI, a professional association and certification body for the technical service and support industry, sees no evidence of a widespread problem. When he specifically asked a couple of IT professionals who are responsible for mobile management in their organization, “they told me either ‘we don’t see it’ or ‘we make believe we don’t see it,'” says Atkinson. “People don’t really want to think about this or talk about it much.”

Escalate or let it go?

Whatever the frequency, when and if NSFW issues do arise, the IT department often winds up functioning as a “first responder” that has to decide whether to escalate the incident or let it go. “If somebody complains about [a co-worker] displaying pictures on their smartphone at a meeting . . . then the company’s acceptable use policy will come into play,” says Atkinson. Or if IT employees find malware that came from a porn site and could endanger the network, they may say something — to the employee or to a manager. “But as we know, policies are enforced somewhat arbitrarily,” Atkinson says.

Barry Thompson, network services manager at ENE Systems, a $37-million energy management and HVAC controls company in Canton, Mass., says he has seen problems increase because of what he calls “bring your own connection.” People assume “that it’s their personal phone so they can do as they like,” he says. But they are using the office Wi-Fi network, which Thompson monitors. He can see every graphic that passes through the network. “If I notice pictures of naked people, I can click on it and find out who’s looking at that,” he says. When that happens, Thompson usually gives a warning on first offense. If it happens again, he brings in the employee’s supervisor.


“It’s like the Wild West out there if it’s the employee’s own device,” says Dipto Chakravarty, executive vice president of engineering and products at ThreatTrack Security. Companies have a hard time enforcing their policies on BYOD devices, because it is, after all, the employee’s device.

Often, the “old boy network” kicks in. The user “is petrified that IT will see all these bad sites that the user has visited,” says Chakravarty. Employees admit they made a mistake and ask IT to please ignore the material. “IT doesn’t really want to see the dirty laundry, so they say, ‘Hey, no problem. I’ll just wipe it clean and you’re good to go,'” he says. “That’s the norm.”

The tendency to “cover for your buddies — guys have been doing that for time immemorial,” says Robert Weiss, senior vice president of clinical development with Elements Behavioral Health and a sex addiction expert. But there are social and ethical concerns for both the employee and for IT, says Weiss, co-author of the 2014 book, Closer Together, Further Apart: The Effect of Digital Technology on Parenting, Work and Relationships.

What happens, asks Weiss, when IT sees photos of naked children on someone’s phone, which could be child porn, or repeatedly removes malware from porn sites from the same user’s device, which could indicate an addiction? IT staffers are typically not well equipped to address criminal or addictive behaviors.

Weiss thinks there should be clear policies that indicate when IT needs to report such information to human resources, similar to policies about repeated drinking or signs of other addictions, and let HR take it from there. “The IT person should not be involved,” he says. “I would not want to put the IT person in the position of having to talk about sex with an employee that they don’t particularly know well.”


At least one technical analyst, who has worked in IT support at a range of companies, thinks reporting such users to HR is taking it too far. Flagging child pornography is one thing, he says, but addiction? “I’m not going to HR about BYOD riddled with porn. It’s their device. As much as I love helping people, their personal porn habits, even at an addiction level, are not my problem. Unless it’s criminal, I don’t care.”

Protecting IT from users

The ideal fix is to create a corporate container to hold all business applications, including corporate email and Internet browsing.

And the best way to achieve that goal is with the emerging class of enterprise mobility management (EMM) technology, says Eric Ahlm, a research director at Gartner. “When properly configured, EMM solutions create a corporate container that provides OS-level security and isolates apps and data in the container from what’s outside,” explains Ahlm. The corporate container can encompass email applications, Web browsers, customer mobile applications and off-the-shelf mobile applications. Within that container, IT can create isolated data-sharing and -protection policies, or easily deploy more mobile apps, or remove them — all without touching the personal information outside of the container, he explains. “It makes all those issues go away.”

On the personnel management side of the equation, companies should be sure to update their acceptable use policies to include BYOD. ENE’s Thompson found that his company’s acceptable use policy did not mention personally owned devices. So last year, says Thompson, ENE amended the policy to specify that “any use of corporate resources or systems, regardless of ownership of the devices, obligates the user to comply with the corporate acceptable use policy.”



Virtual reality gains a small foothold in the enterprise

Prototypes and simulations based on virtual reality can save companies millions.

The rapid growth of the mobile sector has had an unexpected dividend – by bringing down the costs and improving the quality of motion sensors, screens, and processors it has helped usher in a new era of virtual reality technology.

Systems previously available only to the largest manufacturers or to the military can now be put together with consumer-grade technology at a fraction of the price, and companies are already taking advantage of the opportunities.

When it comes to virtual reality, one of the biggest bangs for the buck is in virtual prototypes. Virtual models of buildings, oil tankers, factory floors, store shelves or cars can now be uploaded into a virtual environment and examined by safety inspectors, designers, engineers, customers and other stakeholders.

The Ford Motor Company, for example, has long been using virtual reality when it comes to prototypes and simulations, but the new wave of virtual reality technology is dramatically expanding its reach.

Ford’s Immersive Virtual Environment lab, for example, one of several areas in which the company uses virtual reality, has recently added the Oculus Rift headset to its set of VR platforms.

It’s used in combination with a shell of a car where the seat, steering wheel, and other parts can be repositioned to match those of a prototype car.

“If you look at it, you’d think it was a very stripped-down vehicle,” says Elizabeth Baron, who heads up the lab. But when engineers sit down in the driver’s seat and put on virtual reality headsets, they’re virtually transported into the interior of the prototype.

Elizabeth Baron shows how Ford uses Oculus Rift.

“You have a gas pedal, brakes, steering wheel, a door, and when you’re touching stuff, it’s real,” Baron says. “But when you’re looking around, you’re seeing the virtual data. That’s where the Oculus is specially useful.”

The Oculus Rift is the head-mounted virtual reality display that ushered in the current age of virtual reality with a $2.4 million Kickstarter campaign in 2012, followed by a jaw-dropping $2 billion buyout by Facebook earlier this year.

The Oculus Rift hasn’t officially hit the market yet, but developer kits are available from the company for $350 each and more than 100,000 have already been sold. The device combines a high-resolution screen, motion sensors, and a set of lenses. The motion sensors track where the user is looking and the lenses stretch out the screen so it covers most of the user’s field of view. The result is a very convincing illusion that the wearer has been transported into a virtual world.

“I’m extremely excited about the developments in the headspace scene and the work Oculus has done to bring low cost, wide-field of view to the market,” Baron says. “I’m just over the moon about it. The good thing for Ford is, with our approach for using different display technologies, we’re already ready to take advantage of the developments that come out of the virtual headset space.”

Another virtual reality system is a CAVE (computer assisted virtual environment), which is a room with large screens on three walls and on the ceiling. Users wear stereoscopic glasses for a holodeck-like effect – life-size, 3D images of objects appear in the middle of the room, so that engineers can walk around and examine them.

Another system allows users to walk around inside a large open space while it tracks their position. “We can put an F-250 [super duty truck] into that environment and you can walk around it like it’s a life-sized vehicle,” Baron says. “It’s like an inspection tool for what we’re producing and what our customers might take delivery of. That’s a really important aspect in our product development process.”

A virtual environment allows engineers to dial up different lighting settings, to see how the exterior would look at noon on a hazy day, or in the evening or under mercury vapor lights. Virtual environments also help enable long-distance collaboration, she says.

“We also have a virtual space in Australia, and if they’re immersed and we’re immersed at the same time, we can see where they are in the virtual environment and we can talk to each other,” she says. “We can say, ‘Look at this, look at that.’”

And virtual reality allows the company to look at many more prototypes than would have been possible if they had to be actually built.

“There is no way we could build thousands of prototypes,” she says. “We would only be able to build a handful. But also, there is no way we could check in the physical world all the things we check in the virtual worlds. We can make intelligent decisions about our design, with respect to how we manufacture it, and that’s a huge time save and cost save.”

Ford is expanding its use of virtual reality, she adds. “We’re actually creating another virtual space here in Dearborn [Michigan] to handle the overflow,” she says. “We’re so packed. We can’t fit in what we can do in one day. It’s been shown to be so valuable.”

Ford also uses virtual reality for manufacturing assembly simulations, to help ensure the health and safety of workers, for training, and to study how drivers behave.

“We have driving simulations, another virtual reality application, where we’ll bring in people who haven’t slept all night and ask them to perform some tasks,” she says. “And then perform an analysis on how they respond versus someone who’s had their fresh cup of coffee and they’re bright and cheerful in the morning.”

Other manufacturing companies are also upgrading their virtual prototypes from simple 3D graphics on a monitor to fully immersive virtual reality systems such as those made possible by the Oculus Rift and similar devices.

Medical device companies, for example, are among the early adopters, says Jeremy Duimstra, a professor of user experience at University of California San Diego and CEO and creative director at San Diego-based MJD Interactive, which counts Disney, Red Bull, P&G and Titleist among its clients.

“Being able to virtually interact with a device in the design phase, without having to build physical objects … allows for more innovation,” he says.

Plus, there’s the cost savings of materials and manpower of physically mocking up hundreds of prototypes. “Build the product virtually, test it, iterate, and only build when you know it’s right,” he says.

Environments that are physically dangerous for people are also ripe for going virtual.

“Our oil and gas clients are definitely interested in this space,” says Mary Hamilton, who heads up the digital experiences research and development group at Accenture. Immersive virtual reality allows people who might be in different locations to visit a difficult-to-reach facility, to get views such as X-rays or schematic views that might be impossible in real life, and enables low-risk, lower-cost training for new employees.

Marketing applications are also expanding, she says.

For example, low-cost head-mounted displays will allow retailers to replace their immersive CAVE environments, which can cost hundreds of thousands of dollars to set up. Companies can use the technology to have focus groups walk through virtual stores, interact with different shelf layouts, or even try out new products.

“It would significantly lower costs, allow companies to do more of this, and allow them to do it in multiple locations,” she says.

The second wave
One virtual reality wave has already come and gone, in the 1990s. Movies like “The Lawnmower Man,” devices like Nintendo’s Virtual Boy and virtual reality arcades made the technology hot, but by the time “The Matrix” came out at the end of the decade it was clear that virtual reality technology was too expensive and too bulky for widespread use. In addition, graphics quality was poor and high latency and poor head-tracking combined to make users nauseous.

As a result, virtual reality became limited to high-end, narrowly focused applications such as military simulations, movie special effects, and training and simulations in manufacturing, oil, and the medical industries, says Jacquelyn Ford Morie, formerly a virtual reality expert at the University of Southern California’s Institute for Creative Technologies. Virtual reality immersion therapy has been used for a decade now to treat Post Traumatic Stress Disorder, and to manage the pain of burn victims.

“Now we have this second wave of virtual reality,” says Morie. “The difference between then and now is that it’s affordable. Instead of a $30,000 head-mounted display, you now have a $300 head-mounted display.”

Jacquelyn Ford Morie is founder and chief scientist at All These Worlds Inc., a Los Angeles-based virtual environment consulting and development firm.

The general population is also more used to technology than it was 20 years ago, she adds, and there are more companies creating content for the new virtual reality platforms. Her own company creates applications in virtual worlds for NASA and other enterprise clients.

“We’re doing things like making virtual worlds that will help astronauts on long-duration space flight missions,” she says.

Today, most enterprise virtual reality is internally focused, she says. That is likely to change as more of this technology gets into the hands of consumers, and she’s looking forward to working on consumer-focused projects.

“If everyone has a 3D head-mounted display, there’s no reason not to feed a preview of that new product,” she says. “Create emotionally evocative, 3D immersive ads, so all of a sudden they feel like they’re on the mountain, about to ski down with my new snowboard.”


 


Apple iPad Air 2 is thinner and speedier than its predecessors

Apple’s iPad Air 2 is thinner and lighter than its predecessor, and should be speedier as well, thanks to a new processor.

It also has improved camera and security features, as does the iPad Mini 3, Apple said Thursday during an event at its Cupertino, California, campus, unveiling the tablets at a time when the company’s dominance in that market has waned.

The iPad Air 2, which has a 9.7-inch screen, is 6.1 millimeters thick, 18 percent thinner than the iPad Air. The Air 2 offers 10 hours of battery life.

The tablet has the all-new A8X chip, which is a variant of the A8 chip in the iPhone 6 and iPhone 6 Plus. The chip is 40 percent faster and provides 2.5 times better graphics than the A7 chip in the iPad Air.

“We’re able to deliver console-level graphics in your hand,” said Phil Schiller, Apple senior vice president of worldwide marketing.

Other features include an 8-megapixel iSight rear camera and a FaceTime front camera. The iSight camera can take 1080p video that can be manipulated in multiple modes such as slow motion and time lapse.

Image and video manipulation tools such as Pixelmator and Replay will help users edit, repair and manipulate images, taking advantage of the faster graphics processor.

The iPad Mini 3 has a 7.9-inch screen, an A7 chip, a 5-megapixel iSight rear camera, a FaceTime camera and 802.11ac wireless.

Both tablets have the Touch ID fingerprint sensor, which lets users bypass passwords when logging into a device or buying things online. The fingerprint technology is used with the Apple Pay payment system.

The iPad Air 2 is priced at US$499 for the 16GB Wi-Fi model, $599 for 64GB and $699 for 128GB. A version of the tablet with cellular connectivity costs $130 more.

The iPad Mini 3 is priced at $399 for 16GB, $499 for 64GB and $599 for 128GB.

Both tablets can be ordered now, with shipping set for next week.

The tablets are hitting the market at a time when Android tablet makers Samsung, Lenovo and Asus are gaining ground on Apple. Apple’s tablet shipments declined 9.3 percent during the second quarter of 2014 compared to the same quarter last year, while overall worldwide tablet shipments went up 11 percent, according to IDC.

Apple faces further challenges as more users opt for larger-screen smartphones and hybrid devices instead of tablets. The iPhone 6 Plus, which has a 5.5-inch screen, is off to a hot start, and could hurt iPad sales. And Google’s Nexus 9, the first 64-bit Android tablet, starts shipping next month.

IDC is projecting overall worldwide tablet shipments to grow by just 6.5 percent this year.

But Apple CEO Tim Cook put a—predictably—positive spin on the situation at the event, noting that the company has sold 225 million tablets.

“We’ve sold more iPads in the first four years than any product in our history,” Cook said.


 


Startup proposes fiber-based Glass Core as a bold rethink of data center networking

Software Defined Networking (SDN) challenges long-held conventions, and newcomer Fiber Mountain wants to ride the SDN momentum to leapfrog ahead and redefine the fundamental approach to data center switching in the process. The promise: 1.5x to 2x the capacity for half the price.

How? By swapping out traditional top of rack and other data center switches with optical cross connects that are all software controlled. The resultant “Glass Core,” as the company calls it, provides “software-controlled fiber optic connectivity emulating the benefits of direct-attached connectivity from any port … to any other server, storage, switch, or router port across the entire data center, regardless of location and with near-zero latency.”

The privately funded company, headed by Founder and CEO M. H. Raza, whose career in networking includes stints at ADC Telecommunications, 3Com, Fujitsu BCS and General DataComm, announced its new approach at Interop in New York earlier this week. It’s a bold rethinking of basic data center infrastructure that you don’t see too often.

“Their value proposition changes some of the rules of the game,” says Rohit Mehra, vice president of network infrastructure at IDC. “If they can get into some key accounts, they have a shot at gaining some mind share.”

Raza says the classic approach of networking data center servers always results in “punting everything up to the core” – from top of rack switches to end of row devices and then up to the core and back down to the destination. The layers add expense and latency, which Fiber Mountain wants to address with a family of products designed to avoid as much packet processing as possible by establishing what amounts to point-to-point fiber links between data center ports.

“I like to call it direct attached,” Raza says. “We create what we call Programmable Light Paths between a point in the network and any other point, so it is almost like a physical layer connection. I say almost because we do have an optical packet exchange in the middle that can switch light from one port to another.”

That central device is the company’s AllPath 4000-Series Optical Exchange, with 14 24-fiber MPO connectors, supporting up to 160×160 10G ports. A 10G port requires a fiber pair, and multiple 10G ports can be ganged together to support 40G or 100G requirements.

The 4000 Exchange is connected via fiber to any of the company’s top-of-rack devices, which are available in different configurations, and all of these devices run Fiber Mountain’s Alpine Orchestration System (AOS) software.

That allows the company’s homegrown AOS SDN controller, which supports OpenFlow APIs (but is otherwise proprietary), to control all of the components as one system. Delivered as a 1U appliance, the controller “knows where all the ports are, what they are connected to, and makes it possible to connect virtually any port to any other port,” Raza says. The controller “allows centralized configuration, control and topology discovery for the entire data center network,” the company reports, and allows for “administrator-definable Programmable Light Paths” between ports anywhere in the data center.
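
Fiber Mountain has not published the AOS controller interface, so the sketch below is purely illustrative: a toy model of the core idea of a software-controlled cross-connect that maps any port to any other port. Every class and method name here is hypothetical and is not part of AOS or OpenFlow.

class OpticalExchange:
    """Toy model of a software-controlled optical cross-connect (hypothetical API)."""

    def __init__(self, port_count):
        self.port_count = port_count
        self.paths = {}  # port -> peer port for each established light path

    def connect(self, src, dst):
        """Establish a point-to-point light path between two free ports."""
        for p in (src, dst):
            if not 0 <= p < self.port_count:
                raise ValueError("port %d out of range" % p)
            if p in self.paths:
                raise RuntimeError("port %d already carries a light path" % p)
        self.paths[src] = dst
        self.paths[dst] = src

    def disconnect(self, port):
        """Tear down the light path attached to a port."""
        peer = self.paths.pop(port)
        self.paths.pop(peer, None)

# A controller's view of one 160x160-port exchange:
exchange = OpticalExchange(port_count=160)
exchange.connect(3, 87)      # e.g. a server port patched straight to a storage port
print(exchange.paths[3])     # 87

The point of the model is simply that once the controller holds the full port map, "any port to any other port" becomes a table update rather than hop-by-hop packet switching.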
How do the numbers work out? Raza uses a typical data center row of 10 racks of servers as the basis for comparison. The traditional approach:

Each rack typically has two top-of-rack switches for redundancy, each of which costs about $50,000 (so $100,000/rack, or $1 million per row of 10 racks).
Each row typically has two end-of-row switches that cost about $75,000 each (another $150,000).
Cabling is usually 5%-10% of the cost (10% of $1.15 million adds $115,000).
Total: $1.265 million

With the Fiber Mountain approach:
Each top-of-rack switch has enough capacity to support two racks, so a fully redundant system for a row of 10 racks is 10 switches, each of which costs about $30,000 ($300,000).
The 4000-series core device set up at the end of an aisle costs roughly $30,000, and you need two ($60,000).
Cabling is more expensive because of the fiber used, and while it probably wouldn’t be more than double the expense, for this exercise Raza says to use $300,000.

Total: $660,000. That’s roughly half, and it doesn’t include the savings that would be realized by reducing demands on the legacy data center core now that you aren’t “punting everything up” there all the time.
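
For readers who want to check the math, the comparison reduces to a few lines of arithmetic. The sketch below just restates the rough figures Raza cites; it is not vendor pricing.

racks = 10

# Traditional approach
tor_switches = racks * 2 * 50_000              # two $50K top-of-rack switches per rack
eor_switches = 2 * 75_000                      # two $75K end-of-row switches
traditional_gear = tor_switches + eor_switches
traditional = traditional_gear + traditional_gear // 10   # plus ~10% for cabling
print(traditional)        # 1265000

# Fiber Mountain approach
fm_tor = 10 * 30_000      # 10 top-of-rack switches at ~$30K (each covers two racks)
fm_core = 2 * 30_000      # two 4000-series exchanges per aisle for redundancy
fm_cabling = 300_000      # pricier fiber cabling, per Raza's rough estimate
fiber_mountain = fm_tor + fm_core + fm_cabling
print(fiber_mountain)     # 660000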

What’s more, Raza says, “besides lower up front costs, we also promise great Opex savings because everything is under software control.”

No one, of course, rips out depreciated infrastructure to swap in untested gear, so how does the company stand a chance at gaining a foothold?

Incremental incursion.
Try us in one row, Raza says. Put in our top-of-rack switches, connect the server fibers and the existing top-of-rack switch fibers to them, and connect our switches to one of our cores at the end of the aisle. “Then, if you can get somewhere on fiber only, you can achieve that, or, if you need the legacy switch, you can shift traffic over to that,” he says.

Down the road, connect the end-of-aisle Glass Core directly to other end-of-row switches, bypassing the legacy core altogether. The goal, Raza says, is to directly connect racks and start to take legacy switching out.

While he is impressed by what he sees, IDC’s Mehra says “the new paradigm comes with risks. What if it doesn’t scale? What if it doesn’t do what they promise? The question is, can they execute in the short term. I would give them six to 12 months to really prove themselves.”

Raza says he has four large New York-based companies considering the technology now, and expects his first deployment to be later this month (October 2014).



States worry about ability to hire IT security pros

States’ efforts to improve cybersecurity are being hindered by a lack of money and people. States don’t have enough funding to keep up with the increasing sophistication of the threats, and can’t match private-sector salaries, says a new study.

This just-released report by Deloitte and the National Association of State CIOs (NASCIO) about IT security in state government received responses from chief information security officers (CISOs) in 49 states. Of that number, nearly 60% believe there is a scarcity of qualified professionals willing to work in the public sector.

Nine in 10 respondents said the biggest challenge in attracting professionals “comes down to salary.”

But the problem of hiring IT security professionals isn’t limited to government, according to Jon Oltsik, an analyst at Enterprise Strategy Group (ESG).

In a survey earlier this year of about 300 security professionals by ESG, 65% said it is “somewhat difficult” to recruit and hire security professionals, and 18% said it was “extremely difficult.”

“The available pool of talent is not really increasing,” said Oltsik, who says that not enough is being done to attract people to study in this area.

Oltsik’s view is backed by a Rand study, released in June, which said shortages “complicate securing the nation’s networks and may leave the United States ill-prepared to carry out conflict in cyberspace.”

The National Security Agency is the country’s largest employer of cybersecurity professionals, and the Rand study found that 80% of hires are entry level, most with bachelor’s degrees. The NSA “has a very intensive internal schooling system, lasting as long as three years for some,” Rand reported.

Oltsik said if the states can’t hire senior people, they should “get the junior people and give them lots of opportunities to grow and train.” Security professionals are driven by a desire for knowledge, want to work with researchers and want opportunities to present their own work, he said.

Another way to help security efforts, said Oltsik, is to seek more integrated systems, instead of a lot of one-off systems that require more people to work on them.



How to ensure the success of your private PaaS project

This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.

Building a private platform-as-a-service (PaaS) cloud that provides on-demand access to databases, middleware, presentation layer and other services can enable consumer agility, lower the cost to maintain that agility, and increase the utilization of on-premise resources.

That is a trifecta of self-reinforcing value for the business. Agility and cost have traditionally been thought of as tradeoffs in IT, but thanks to standardization, consolidation and automation that is tightly coupled with the technology used to provide cloud services, private PaaS clouds have the potential to eliminate that tradeoff.

In order to achieve those simultaneous benefits, it’s important to think about why you are implementing a private cloud, what workloads make sense for that private cloud, and how you intend to marry the two together. Let’s look at five practical considerations for a successful discrete private cloud implementation project that can form the building block of an eventual larger-scale transformation.

* Suitable workloads. Most IT services are used either to run the business or to grow/transform the business. “Run the business” activities such as ERP, CRM, finance and HR tend to have stable workloads, usually consisting of small deviations around a moving average, perhaps with relatively predictable spikes such as seasonal or periodic variations. These activities also have a generally lower rate of change because they are ingrained in organizational processes, and they are often centrally managed for the same reason.

On the other hand, grow/transform activities involve the launch of new offerings, big data analysis, cross-channel marketing/selling, and organizational change, all of which have unpredictable workloads and high ongoing rates of change.

Your intuition might tell you that the grow/transform activities are naturals for private PaaS clouds. The agility gained by developers allows them to provide new services on a short timeframe, while also allowing those services to be rapidly decommissioned if circumstances change. It’s also fairly obvious that many “run the business” workloads do not need the agility of private clouds, and in fact it may be a high-risk maneuver to place them on a shared services environment.

There is, however, a gray area in times of business change. During these periods of evolution many “run the business” applications are forced to act more like “grow/transform the business” applications – with high rates of change, variable workloads and the need for rapid provisioning/decommissioning. In those cases, the neat segregation between workloads breaks down. As such, a private PaaS cloud needs to be able to provide services that address the needs of both types of applications. It needs to provide agility… but with the reliability/security/scalability of traditional IT services.

* What services to offer? When it comes to figuring out what services to offer, the answer lies with your users. Help them prioritize their needs and offer as few services as possible.

Customers can have hundreds of variants in their IT environments. This sprawl is often the result of a lack of governance, a lack of standardization, and a bottom-up/best-of-breed mentality that results in “configuration pollution” (a wide variance among arguably similar stack configurations). Managing such an environment is expensive and inefficient.

Example of categories for a Private Cloud Service Catalog.
Compare that complexity to the service catalog of most public cloud providers. For example, the Oracle Database Cloud Service has only a few offerings. Not 50, or 500 or 5,000. Your private cloud service catalog should look and sound more like a public provider’s catalog. Doing that requires making choices about standardization and consolidation. Sometimes these choices are politically challenging, but they need to be made nonetheless or your private cloud will not provide the cost optimization that it should.

* Is chargeback necessary? Chargeback/showback is the idea of passing consumption costs back to the consumer either via internal automated transfer costs (chargeback) or simply via reporting (showback). It sounds great on paper, and is a relatively simple matter to execute technically with a fully-integrated cloud management regime (since the software that does the automated provisioning knows who’s using what at all times). But the transparency it provides is truly transformational to an organization, and therefore has political and human consequences. A well-implemented private PaaS cloud will automatically have all the information IT needs to make costs transparent, but making those costs visible is an organizational decision.
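
As a rough illustration of how simple the mechanics can be once provisioning is automated, here is a minimal showback sketch. The usage records, rates and field names are invented for the example; they are not taken from any particular cloud management product.

# Minimal showback sketch: roll up consumption per business unit from the
# provisioning records a cloud management layer already keeps.
# All field names and rates below are invented for illustration.

from collections import defaultdict

RATES = {"database": 2.50, "middleware": 1.25, "presentation": 0.75}  # $ per instance-hour

usage = [
    {"team": "marketing", "service": "database",     "hours": 400},
    {"team": "marketing", "service": "presentation", "hours": 900},
    {"team": "finance",   "service": "middleware",   "hours": 250},
]

def showback(records):
    """Return total cost per team; reporting only, no funds actually move."""
    totals = defaultdict(float)
    for r in records:
        totals[r["team"]] += RATES[r["service"]] * r["hours"]
    return dict(totals)

print(showback(usage))
# {'marketing': 1675.0, 'finance': 312.5}

Turning this report into true chargeback is an accounting and political decision, not a technical one; the data is already there.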

* PaaS versus IaaS? To reiterate, if your goal is agility and cost reduction, PaaS gives you more flexibility, more efficiency and more value than infrastructure-as-a-service (IaaS). The raw shared compute and storage (think hypervisors, guest OSs, etc.) that IaaS provides are simply containers that then need to be installed, configured and managed, and that cost lives somewhere (either in the provider or the consumer’s bucket).

Furthermore, because most organizations pass the configuration effort onto the consumers, the tendency toward “bottom-up” configuration pollution continues to be a problem. I call this “cost shifting.” PaaS, on the other hand, provides instantly consumable services (database, middleware, presentation layer, etc.) in standardized configurations that can be managed with minimum effort on an individual basis and with maximum efficiency at enterprise scale. IaaS provides efficiency but primarily just shifts costs around. PaaS doesn’t just shift costs around; it eliminates a substantial portion of them outright.

Chart of service types

* How to succeed? The most successful model I’ve seen to introduce PaaS to an organization is to start small, with a well-defined scope. Pick a service, or two services, and a defined user base (say, a particular LOB development organization in a “grow the business” activity) and let them see what PaaS can do for them.

In conclusion, PaaS clouds offer an unprecedented opportunity to simultaneously lower costs, increase agility and maximize utilization. They also carry the potential for meaningful cultural transformation by making IT costs transparent. Unlocking that value requires careful up-front analysis and an unwavering commitment to consolidation, standardization and automation — and most importantly, simplicity. But with the proper commitment, the rewards can be tremendous.


 


Apple quickly replaces bungled iOS 8 update

Offers iOS 8.0.2 to users as a replacement for Wednesday’s botched update

Apple yesterday released iOS 8.0.2, a replacement for the botched update that shipped the day before but crippled iPhone 6 and iPhone 6 Plus devices by knocking them off their mobile carriers.

The turn-around for 8.0.2 was notable for its speed: Less than 36 hours after Apple yanked the flawed iOS 8.0.1, it began offering the substitute to customers.

Apple cranked out the replacement for Wednesday’s crippling update in under 36 hours.

“Fixes an issue in iOS 8.0.1 that impacted cellular network connectivity and Touch ID on iPhone 6 and iPhone 6 Plus,” the note accompanying iOS 8.0.2 stated.

Computerworld confirmed that the iOS 8.0.2 update installed on both the newest iPhones as well as on older models without problems, and without blocking phone calls on the iPhone 6 and 6 Plus.

According to Apple, iOS 8.0.1 affected only the newest iPhones, although there were scattered reports — including a small handful from Computerworld readers — that some users had experienced problems with their iPads after installing Wednesday’s update during the narrow 90-minute window it was available.

Customers had blasted Apple for the bungled iOS 8.0.1, wondering how the company had not caught the problem in testing. Some had demanded compensation for their troubles.

iPhone owners who downloaded iOS 8.0.1 while it was available but did not finish installing it before Apple pulled the update must first delete it from their devices before they can retrieve 8.0.2. To delete the iOS 8.0.1 download, users must touch the “Settings” icon, then “General,” next “Usage,” and finally “Manage Storage.” Tapping the iOS 8.0.1 item and selecting “Delete” will remove the obsolete update.

iOS 8.0.2 can be downloaded over the air from iPhones, iPads, iPad Minis and iPod Touches, or through iTunes. From an iPhone, for instance, users must touch the “Settings” icon, then the “General” button on the resulting screen. Tapping “Software Update” will kick off the update process.



CompTIA Taps Brain Science to Help You Conquer Certification Testing

CompTIA has launched a new e-learning tool that leverages research in neurobiology, cognitive psychology and game studies and uses key principles from each field to help you learn necessary information quickly and retain it long-term.

The CertMaster training program, available for the A+, Network+, Security+ and Strata IT Fundamentals certification tracks, features a variety of techniques to help you learn, including adaptive learning, spacing and motivation triggers, and is designed to give students the confidence to pass the test and move on to an IT career, says Terry Erdle, CompTIA’s executive vice president of certification and learning.

Closing the IT Training Gap

“We’ve seen over the years that we train far more people than we test,” Erdle says. “While there’s tremendous effort to train on tests like A+ and Network+, students don’t always follow through and actually take the exam, and we are trying to address some of the reasons that’s happening,” he says.

Erdle says sometimes this lack of follow-through is cultural, or because of logistical challenges — some students can’t physically get to a testing location, he says — but is most often a result of an intense fear of failure or lack of confidence in the ability to learn and retain the information.

“We’re trying to overcome these challenges by tapping into brain science and figuring out how best to prepare students to both sit a challenging exam and pass, and also how to have confidence that those skills and knowledge will stay with them as they move into new IT jobs,” Erdle says.

The Science of Games

Most of the scientific research CompTIA used to develop CertMaster is the same as that used by the gaming industry to keep players engaged and energized while playing, which encourages a sense of progression, risk, achievement and curiosity. That boosts dopamine levels — since it feels good to succeed — and creates a positive feedback loop, making it easier to retain information and skills. CertMaster also provides immediate, high-level feedback to encourage students to learn from mistakes and lessen the risk they’ll abandon the course, says Erdle.

Erdle says CertMaster’s personalized, adaptive learning and data analytics technology will customize the training to each individual’s strengths, weaknesses and level of knowledge and retention.

[Related: How Gamification Makes Customer Service Fun]

“In traditional learning, repetition is how material is taught. But in adult learning situations, each adult brings a different level of knowledge about different subjects, and it’s hard to know what’s relevant for each one,” he says. “CertMaster can quickly figure out and then benchmark each student’s knowledge so they’re not hammering on material they already know,” he says.

Data analytics can also predict when students have reached their maximum learning capacity and need to take a break.
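
CompTIA has not published CertMaster’s internals, but the general idea behind adaptive question selection can be sketched in a few lines. The simple mastery model below is a deliberately crude stand-in, not CertMaster’s actual algorithm, and all topic names and parameters are invented.

# Toy sketch of adaptive question selection: ask more about weak topics,
# less about ones the student has already shown mastery of.
# This is a simplified stand-in, not CertMaster's actual algorithm.

import random

mastery = {"networking": 0.9, "security": 0.4, "hardware": 0.6}  # 0 = unknown, 1 = mastered

def next_topic(scores):
    """Pick the next topic, weighting toward the weakest areas."""
    weights = {t: 1.0 - s for t, s in scores.items()}
    topics, w = zip(*weights.items())
    return random.choices(topics, weights=w, k=1)[0]

def update(scores, topic, correct, rate=0.1):
    """Nudge the mastery estimate after each answer."""
    target = 1.0 if correct else 0.0
    scores[topic] += rate * (target - scores[topic])

topic = next_topic(mastery)        # most likely "security", the weakest area here
update(mastery, topic, correct=True)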

CompTIA CertMaster is available starting June 4, 2014, and costs $139 per course, Erdle says. Discount pricing is available for academic institutions, and separate channel partner pricing is also available, he says. CertMaster can be accessed via any Web browser, and on both iOS and Android mobile devices. Visit certification.comptia.org for more information.



Hackers compromised nearly 5M Gmail passwords

Gmail users urged to change passwords after apparent attack

Security experts are urging Gmail users to change their passwords amid reports that hackers gained access to the credentials of 5 million users of the free email service. Some password combinations have been spotted on Russian cybercrime forums.

Peter Kruse, head of the eCrime unit at CSIS Security Group in Copenhagen, told Computerworld that most of the nearly 5 million stolen Gmail passwords are about three years old, but many are still legitimate and functioning.

He said that CSIS experts suspect that several hackers worked on an endpoint compromise to exploit vulnerable network protocols.

Google did not respond to a Computerworld request for comment but has told other news outlets that it has found no evidence that its systems have been compromised.

Google’s cloud-based email service is used by individuals as well as enterprises.

Russian media outlet RIA Novosti reported that hackers stole a database containing Google account logins and passwords and published it to a Bitcoin Security online forum.

The database reportedly contains 4.93 million Google accounts from English, Russian and Spanish users.

Kruse said the discovery of the hack comes just days after more than 4.6 million Russian-based Mail.ru accounts and 1.25 million Yandex e-mail boxes were reportedly compromised. Yandex is the largest Russian-based search engine.

 



Intel’s Core M chips headed to 20 Windows tablets, hybrids

Intel’s new Core M chips — which bring PC-like performance to paper-thin tablets — will initially be in many Windows 8.1 tablets, but no Android devices are yet on the radar.

The chips will be in five to seven detachable tablets and hybrids by year end, and the number of devices could balloon to 20 next year, said Andy Cummins, mobile platform marketing manager at Intel.

Core M chips, announced at the IFA trade show in Berlin on Friday, are the first based on the new Broadwell architecture. The processors will pave the way for a new class of thin, large-screen tablets with long battery life, and also crank up performance to run full PC applications, Intel executives said in interviews.

“It’s about getting PC-type performance in this small design,” Cummins said. “[Core M] is much more optimized for thin, fanless systems.”

Tablets with Core M could be priced as low as US$699, but the initial batch of detachable tablets introduced at IFA are priced much higher. Lenovo’s 11.6-inch ThinkPad Helix 2 starts at $999, Dell’s 13.3-inch Latitude 13 7000 starts at $1,199, and Hewlett-Packard’s 13.3-inch Envy X2 starts at $1,049.99. The products are expected to ship in September or October.

Core M was also shown in paper-thin prototype tablets running Windows and Android at the Computex trade show in June. PC makers have not expressed interest in building Android tablets with Core M, but the OS can be adapted for the chips, Cummins said.

The dual-core chips draw as little as 4.5 watts, making Core M the lowest-power Core processor ever made by Intel. Clock speeds start at 800MHz when running in tablet mode and scale up to 2.6GHz when running PC applications.

The power and performance characteristics make Core M relevant primarily for tablets. The chips are not designed for use in full-fledged PCs, Cummins said.

“If you are interested in the highest-performing parts, Core M probably isn’t the exact right choice. But if you are interested in that mix of tablet form factor, detachable/superthin form factor, this is where the Core M comes into play,” Cummins said.

For full-fledged laptops, users could opt for the upcoming fifth-generation Core processor, also based on Broadwell, Cummins said. Those chips are faster and will draw 15 watts of power or more, and be in laptops and desktops early next year.

New features in Core M curbed power consumption, and Intel is claiming performance gains compared to chips based on the older Haswell architecture. Tablets could offer around two more hours of battery life with Core M.

In internal benchmarks, the dual-core Core M 5Y70 CPU provided faster application and graphics performance compared to the Haswell-based Core i5-4302Y chip operating at 4.5 watts. The Core M chip was faster by 19 percent on office productivity, 12 percent on Web applications, 47 percent on 3D graphics and 82 percent on video conversion.

The new 14-nanometer manufacturing process also helped reduce the Core M size and power consumption. Intel’s current chips are made using the 22-nm process.

“We needed to have smaller transistors and smaller die, which leads to a smaller package” that can fit inside thin tablets, Cummins said.

More innovative features are in store for devices with Core M. Starting in early 2015, there will be an option for wireless docking through WiGig, a wireless technology faster than Wi-Fi. Intel is currently developing a “smart dock” through which laptops can wirelessly connect to monitors and external peripherals like mice and keyboards.

