Worst security breaches of the year 2014: Sony tops the list

Theft of credit card numbers from stores was the major trend in data breaches, signaling the maturity of for-profit cybercrime networks

As 2014 winds down, the breach of Sony Pictures Entertainment is clearly the biggest data breach of the year and among the most devastating ever suffered by any corporation.

Attackers broke in and took whatever they wanted, exfiltrating gigabytes upon gigabytes of documents, emails and even entire movies, apparently at will, for months on end.

The public posting of the stolen data, much of it involving celebrities, has been a public relations nightmare for the company. It exposed snarky personal comments never meant to go public, personal information such as Social Security numbers and salaries, and competitive information about projects in progress.

The scenario is any corporate IT security pro’s worst fear – being pwned and hung out to dry publicly. Add to that lawsuits being filed against Sony by former employees seeking damages they say they suffered because the company failed to adequately protect the data.

Whereas most breaches are carried out for profit – such as theft of credit card information – this attack was intended to hurt its victim as much as possible on multiple fronts and has been very successful.

Many of the big for-profit breaches involved compromises of the credit/debit card swiping machines at retail stores, among them Target, Home Depot, Neiman Marcus, Michaels and P.F. Chang’s.

A common way the crooks got in was by infiltrating trusted business partners and stealing legitimate credentials for accessing the victims’ networks. Once inside, they moved from machine to machine until they reached the subnets containing point-of-sale machines, which they infected with scrapers to steal card numbers and expiration dates.

While Sony’s woes dominate the headlines about hacks, there were other significant break-ins this year. Here are a few of them, briefly described.

Sony
How they got in – Unknown. Speculation ranges from an attack launched in a Thailand hotel to an inside job.
How long they went undetected – Unknown.
How they were discovered – On Nov. 22, employee computers displayed skulls on their screens along with messages threatening public distribution of stolen data.

Target
The Target breach happened last year but the important details came out this year so it’s included here.
Data compromised – 40 million credit and debit cards, plus phone numbers, mailing addresses and email addresses for 70 million customers.
How they got in – Hacking the credentials of a legitimate business associate, an HVAC company, to get on Target’s network, then installing malware on point-of-sale machines.
How long they went undetected – About two weeks.
How they were discovered – The Department of Justice told them about it, but anti-malware software flagged the problem as well.

Home Depot
Data compromised – As many as 56 million credit cards put at risk, 53 million email addresses
How they got in – Via a third-party vendor’s credentials followed up by exploiting an unpatched Windows flaw.
How long they went undetected – From April to September.
How they were discovered – The stores’ executives were told by bank and law-enforcement officials.

Goodwill Industries (C&K Systems)
Data compromised – 868,000 credit/debit card numbers.
How they got in – By infecting point-of-sale card-swipe machines after compromising the network of C&K Systems, the operator of the machines. Two other, unnamed C&K Systems clients were also compromised.
How long they went undetected – 18 months.
How they were discovered – Federal officials and payment card investigators told them.

JP Morgan
Data compromised – Phone numbers and email addresses for 76 million households plus 7 million small businesses.
How they got in – Criminals compromised the computer of an employee with special privileges, a machine used both at work and at home.
How long they went undetected – Three months.
How they were discovered – Internal investigation as well as outside data about a massive stolen credit card ring.

P.F. Chang’s
Data compromised – An unconfirmed number of credit card numbers, possibly as many as an estimated 7 million.
How they got in – Undisclosed, but point-of-sale systems were compromised.
How long they went undetected – Nine months.
How they were discovered – The Secret Service told them about the breach.

Neiman Marcus
Data compromised – 350,000 payment cards
How they got in – Uncertain, but point-of-sale systems were compromised
How long they went undetected – Three months.
How they were discovered – Credit card processors warned about a possible breach and a consultant confirmed it.

Michaels
Data compromised – 2.6 million credit/debit cards
How they got in – Undisclosed, but point-of-sale machines were infected
How long they went undetected – Eight months
How they were discovered – Undisclosed



Hardware torpedoes IBM’s Q4 revenue

Sluggish sales of IBM mainframes and other hardware put a damper on the company’s latest quarterly earnings report

Still hampered by slow hardware sales, IBM reported a 5.5 percent decline in revenue for the fourth quarter, even as it managed to post a 6 percent gain in net income.

Because of the sluggish revenue, IBM senior management will forgo their bonuses, or “personal annual incentive payments,” for the year, said Ginni Rometty, IBM chairman, president and CEO, in a statement.


IBM’s fourth-quarter revenue was $27.7 billion, compared with $29.3 billion in the fourth quarter of 2012, the company announced Tuesday. IBM’s revenue fell short of analysts’ expectation of $28.2 billion, an estimate provided by Thomson Reuters. Revenue for the entire year was $99.8 billion, compared with $104.5 billion in the year prior, a 4.6 percent decrease.

IBM’s fourth-quarter income was $6.2 billion, compared with $5.8 billion in the fourth quarter of 2012. For the year, IBM reported $16.5 billion in income, down 1 percent from $16.6 billion in the prior year.

Revenue from IBM’s Systems and Technology hardware segment was $4.3 billion, down 26 percent from the fourth quarter of 2012. For the year, Systems and Technology delivered $14.4 billion, a decrease of 18.7 percent from the full year 2012.

The services divisions produced so-so results for the company. Revenue from Global Technology Services was $9.9 billion for the quarter, down 3.6 percent from $10.3 billion the same quarter a year before. Revenue from the Global Business Services segment grew slightly, up 0.6 percent to $4.7 billion for the fourth quarter, which ended Dec. 31.

For the year, Global Technology Services revenue shrank to $38.5 billion, down 4.2 percent from $40 billion the year before. Global Business Services revenue also shrank by 0.9 percent, to $18.4 billion from $18.6 billion a year ago.

Revenue from the software business grew modestly. For the fourth quarter of 2013, the software group logged $8.1 billion in revenue, a 2.8 percent increase from $7.9 billion in the same quarter a year ago. For the year, the IBM software group generated $26 billion in revenue, up 1.9 percent from $25.4 billion in 2012.

“Our software, services and financing businesses are all on solid ground, but in hardware, we’ve entered the back-end mainframe product cycle, and we are dealing with some challenges in other areas. These are impacting our overall results,” said Martin Schroeter, IBM chief financial officer, in a webcast to investors.

With hardware, IBM was plagued in a number of areas. System z mainframe sales were down because the product line is between releases. Other areas of hardware are feeling the impact of “business model issues due to market shifts,” some of which is coming from pricing pressure from lower-cost hardware alternatives, Schroeter said.

System z sales were down 37 percent compared with a very strong quarter a year ago. MIPS (millions of instructions per second), a measure of mainframe computing capacity shipped, declined 26 percent, also against a strong year-ago quarter. Sales of Power systems declined 31 percent. While the company continues to ship Power systems, the greater efficiency of the newer systems reduces the size of the systems being shipped, lowering revenue for IBM, Schroeter said. System x sales were down 15 percent.

PureSystems, a new offering introduced last year, provided one bright spot on the hardware side. IBM shipped more than 2,500 PureSystems units in the past quarter, and 10,000 since launch.

Another area of concern for IBM has been sales in China, which declined by 23 percent, chiefly in hardware sales.

A large part of this decline came from a broad-reaching Chinese government economic reform initiative, which has stalled state agency IT purchases. The initiative slowed sales in IBM’s previous quarter as well.

“While there is more clarity in the overall plan, we continue to believe it will take some time for business in China to improve,” Schroeter said.

In contrast, revenue in Japan grew by 4 percent, and has grown for the past five quarters. Schroeter attributed this success to IBM’s ability to shift its market focus and investment to meet current IT needs.

In the past few weeks IBM announced two major initiatives. The company plans to invest an additional $1.2 billion to beef up its cloud infrastructure. It has also launched a new business group focused on providing Watson-style cognitive computing capabilities to help organizations make better use of their large amounts of data.

The company expects both initiatives to generate substantial business over time.

“We believe that data as a natural resource will drive demand going forward, and big data analytics will provide the basis for competitive differentiation,” Schroeter said.

Data analysis is now “nearly a $16 billion” annual business for the company, he said. Cloud business accounts for $4.4 billion in revenue for the company, of which $1.7 billion was delivered as a cloud service.

IBM continued to perform well for investors. This quarter, the company posted earnings per share of $5.73, a 12 percent increase over EPS of $5.13 for the fourth quarter of 2012. For the year, earnings were $14.94 per share compared with $14.37 per share in 2012, a 4 percent increase.

The company is still on track to reach its target of at least $20 per share by 2015, Schroeter said.



11 Cyber Monday tech deals that truly save you serious money

Real deals, not cyber scams
If you want to see how morally bankrupt the post-Thanksgiving shopping season has become, just poke around online during “Cyber Monday.” You’ll find many of our nation’s major retailers marking up their list prices to advertise “savings” that don’t actually exist, and pushing “limited-time” offers that are readily available elsewhere. But worry not; we’ve dug through these borderline scams to find 11 deals you should actually know about.

Motorola.com: Unlocked Moto X for $140 off
The 2014 Moto X is one of our favorite Android phones. You can customize it with different colors and textures (including real leather and wood), and it bucks the bloatware trend among Android handsets. Motorola’s also good about updating its software—the Moto X is already running Android 5.0 Lollipop. The $140 discount starts Monday at 12 noon Eastern time.

Why it’s a good deal: The discounted base price of $360 is killer for an unlocked “hero” phone, and AT&T or T-Mobile will give a discount on wireless service. [Link]

Best Buy: LG G3 for $1 on-contract
The LG G3 was this year’s sleeper hit among Android phones, and unquestionably the one to get if you value camera quality above all else. With laser-assisted auto-focus, the G3 lines up shots quickly and excels in low light, so you rarely have to call for a do-over.

Why it’s a good deal: Most carriers are still selling the G3 for its sticker price of $199 on contract. While that price will probably fall as the new year rolls around, it doesn’t get any better than a buck right now. [Link]

Microsoft Store: Acer Aspire E15 for $399
The Aspire E15 is a run-of-the-mill budget notebook, with an Intel Core i5 processor, 4GB of RAM, a 500GB hard drive and a built-in DVD player. But because it comes from the Microsoft Store, it has none of the trialware and bloatware that comes standard on laptops from other major retailers. That alone makes it worth a look.

Why it’s a good deal: This is one of the rare Cyber Monday laptop deals that packs Intel Core i5 power for $400. Just don’t expect miracles from the display and build quality. [Link]

Newegg: Samsung 500GB SSD with Far Cry 4 for $180
With many new PC games gobbling gigabytes by the dozen, you’re going to need a roomy solid state drive to run them at top speeds. Samsung’s 840 EVO SSD has a whopping 500GB of storage and respectable sequential read/write speeds of 540MBps and 520MBps, respectively. There’s also a handy transfer tool for upgrading from a smaller drive.

Why it’s a good deal: Newegg has a bunch of storage deals right now (including a $50, 128GB SSD from Sandisk) but $180 is darned cheap for a 500GB drive. The free copy of Far Cry 4 (normally $60) is the cherry on top for your new PC gaming rig. [Link]

Walmart: PlayStation 4 bundle for $449
Console bundles are everywhere this holiday season, but Walmart’s $449 bundle will be hard to beat, especially for families. It includes the PlayStation 4 console, LittleBigPlanet 3, Lego Batman 3, your choice of another game, and a second controller.

Why it’s a good deal: The PS4 normally costs $400, and most other holiday bundles are throwing in a game or two for free. This bundle has three games and an extra controller, so you’re getting about $120 in value over other deals. [Link]

MacMall: 13-inch MacBook Pro with Retina Display for $1,030
Apple’s current MacBook Pros are over a year old now, but they’re still among the best professional-grade laptops you can buy. The discounted model has a dual-core Intel Core i5 processor, 4GB of RAM and 128GB of solid state storage, and it lasted nearly 11 hours in Macworld’s battery test.

Why it’s a good deal: You rarely see Apple products discounted by more than $100 on Black Friday or Cyber Monday, but MacMall’s MacBook Pro deal manages to be $270 off the sticker price. [Link]

Google Play: LG G Watch for $99, $50 of Store credit
The LG G Watch, one of the first wave of Android Wear smartwatches, was quickly upstaged by classier-looking wearables such as the Moto 360 and LG’s own G Watch R. Still, it does a decent job of showcasing how Android Wear works, and it’s practically an impulse buy for the curious at $99.

Why it’s a good deal: The $50 credit toward apps, videos and games from the Google Play Store effectively halves the G Watch’s price if you were planning to buy some content anyway. You can still get the $50 credit when paying full price for a G Watch R, Asus Zenwatch, Samsung Gear Live, Sony SmartWatch 3 or Nexus 9 tablet. [Link]

B&H: iMac with Retina Display for $2,299
Apple’s iMac with Retina Display is a fine piece of machinery, packing 14.7 million pixels into its 27-inch “5K” panel. B&H is knocking $200 off the base model, which includes a 3.5GHz quad-core Intel Core i5 processor, 8GB of RAM and 1TB of fusion drive storage.

Why it’s a good deal: You don’t often see big discounts on Apple products, especially brand-new ones. B&H’s discount doesn’t make the Retina display iMac cheap by any means—rather, a slightly easier splurge. [Link]

Microsoft Store: $100 to $150 off the Surface Pro 3
The Surface Pro 3 is a shining example of what a high-end Windows machine can be, weighing as little as an 11-inch MacBook Air but with a taller, higher-res touchscreen. Detach the keyboard cover, and you have a 1.7-pound tablet with a pen for sketching and a kickstand. Microsoft is knocking $100 off the price for Core i5 models, and $150 off for Core i7 models.

Why it’s a good deal: The discount brings the base price to $1,030 with 128GB of storage and 4GB of RAM. That’s just $30 more than a 13-inch MacBook Pro with similar specs. If you missed the same deal on Black Friday, now’s the time to pull the trigger. [Link]

Staples: Acer Chromebook for $150
Like all other Chromebooks, this one can’t run traditional Windows software such as Office and iTunes. But Acer’s CB3-111-C670 Chromebook gets you online with a full mouse and keyboard at your disposal. It has an 11.6-inch, 1366×768 display, Celeron processor and 2GB of RAM, which should be all you need for basic browsing.

Why it’s a good deal: Normally, Asus’ competing 11-inch Chromebook is the slightly better buy, but these are two very similar machines. The $50 discount on the Acer is just enough to give it the edge. [Link]

Dell: 22-inch 1080p monitor for $99
The holiday shopping season can be a good time to upgrade aging computer monitors, and Dell’s deal in particular is worth a look. The S2240L on sale for $99 has a 21.5-inch display, narrow bezels and a choice of VGA or HDMI input. The screen also tilts from 5 degrees down to 21 degrees up.

Why it’s a good deal: You don’t typically see 22-inch monitors of decent quality cracking the $100 barrier, so multi-monitor users may want to think about stocking up. You’ll have to move quickly, though, as Dell says it will have limited quantities starting at 8 a.m. Eastern. [Link]



The top infosec issues of 2014

Security experts spot the trends of the year almost past
There is still time for any list of the “top information security issues of 2014” to be rendered obsolete. The holiday shopping season is just getting into high gear, after all, and everybody knows it was from late November to mid-December last year when the catastrophic Target breach occurred.

But this list is about more than attacks and breaches – it is about broader infosec issues or trends that are likely to shape the future of the industry.

Several experts offered CSO some thoughts on their top picks, what can be learned from them and whether that knowledge can help organizations improve their security posture in the coming year.

Cyber threats trump terrorism
An Associated Press story this past week on the federal government’s $10-billion annual effort to secure its multiple agencies noted, almost in passing, that, “intelligence officials say cybersecurity now trumps terrorism as the No. 1 threat to the U.S.”

That makes sense to Sarah Isaacs, managing partner at Conventus. While cyber attacks have been expanding and evolving for decades, Isaacs said there has been a qualitative change: It is not just criminals trying to steal money – it is nation-states using attacks for espionage and even military advantage.


In May, “the Department of Justice indicted five members of China’s People’s Liberation Army on felony hacking charges for stealing industrial secrets,” she said. “We’ve never seen that before.”

Then in September, “NATO agreed that a cyber-attack could trigger a military event,” she said. “This is about more than protecting credit cards. This is escalating to new levels.”
Author, security guru and Co3 Systems CTO Bruce Schneier would likely agree. In a recent blog post, he wrote that increasingly sophisticated attacks, especially advanced persistent threats (APT) that are not about financial theft, are coming from “a new sort of attacker, which requires a new threat model.”

There is evidence of that in a recent study by ISACA on APTs. CEO Rob Clyde said 92% of respondents, “feel APTs are a serious threat and have the ability to impact national security and economic stability.”

Losing control to the cloud
Clouds – private, public and hybrid – are not new. But the steady increase in the use of cloud storage services is posing larger risks to businesses.

Schneier, in his blog post, said the continuing migration to clouds means, “we’ve lost control of our computing environment. More of our data is held in the cloud by other companies …”

While experts say cloud service providers frequently provide better security, that may not be true of so-called “shadow” or “rogue” use of clouds by workers who believe that is an easier way to do their jobs than going through IT.

Internet of Everything (IoE) – a hacker frontier
The Internet of Things (IoT) is so last year. It is now the IoE. Smart, embedded devices in homes, cars, electronics and machines, as well as devices worn by individuals, are now mainstream. They already number in the billions, and estimates of their growth range from 50 billion by 2020 to more than a trillion within the next decade.

And that means a growing tsunami of data flowing to the Internet, where it can be sold for marketing purposes or stolen for more malicious means.

Isaacs, who says she is among those who use an exercise wearable, said she used “dummy data” to register it. “So nobody knows it’s my data,” she said. “It can’t be mapped directly to me.”

In general, however, she said, “everyone is oversharing everything. The threats are broad and potentially catastrophic. I’m very nervous about the smart cars I see.”

There does seem to be an increasing awareness of the privacy implications of smart cars. The AP reported this week that 19 automakers that make most of the cars and trucks sold in the U.S. signed on to a set of principles, delivered to the Federal Trade Commission (FTC), that seek to reassure vehicle owners that the information gathered by those vehicles, “won’t be handed over to authorities without a court order, sold to insurance companies or used to bombard them with ads … without their permission.”

The vulnerabilities of “smart” devices to hacking have been demonstrated numerous times, prompting Phil Montgomery, senior vice president of Identiv, to call for “a more regimented standards-based security approach that relies less on outdated processes around username/password technology and more on stronger forms of authentication.”

No parties for third parties
This was the year that the risks of breaches through third-party contractors made it into mainstream consciousness. The Target breach, which exposed 70 million records, was just one of many that came through outside vendors.

Regulatory agencies are trying to maintain that awareness. Stephen Orfei, the new general manager of the Payment Card Industry Security Standards Council (PCI SSC) noted in a recent interview that, “security is only as good as your weakest link – which means the security practices of your business partners should be as high a priority as the integrity of your own systems.”
Christine Marciano, president of Cyber Data-Risk Managers, said that in addition to vetting vendors for rigorous security standards, companies should, “require their vendors to carry and purchase cyber/data breach insurance, to indemnify them for any costs associated with a data breach caused by the vendor’s negligence.”

The porous, sometimes malicious, human OS
While third parties may be a weak link in the security chain, that is less likely due to technology and more due to the human factor.

It was former National Security Agency contractor Edward Snowden who brought the risks of malicious insiders to international attention in 2013, but the danger to enterprises can be just as great from loyal insiders who are simply “clueless or careless,” and fall for social engineering scams.

Joseph Loomis, founder and CEO of CyberSponse, said he is, “sure there are major companies out there with little controls over their employees and their access rights. Who is watching who and what they’re doing?”

It is also about employees controlling themselves when presented with ever-more persuasive social engineering attacks.

The federal government reported earlier this year that 63 percent of the breaches of its systems in 2013 were due to human error.

According to Marciano, “employee negligence was at an all-time high in 2014,” with the problems ranging from, “failure to perform routine security procedures to lack of security awareness, routine mistakes and misconduct.”

Eldon Sprickerhoff, cofounder and chief security strategist at eSentire, noted that, “phishing emails are getting better and better. I’ve seen some that were so well targeted, so well done that I could not tell the difference.”

And it is not just the average worker who is a problem. Identity Finder CEO Todd Feinman said the problem goes all the way to the top. “Many executives don’t know where their sensitive data is so they don’t know how to protect it,” he said.

Ubiquitous BYOD
While BYOD is now mainstream in the workplace, Isaacs calls the increased focus on mobile computing, “very scary, and it’s going to get even worse.”

BYOD is now bringing “extremely unreliable business applications inside the walls of corporations,” she said. “There are a lot of software vulnerabilities. Every app that is free or 99 cents probably doesn’t have a great level of security. And people don’t install patches either.”

According to Clyde, “there are now many times more mobile devices than PCs in the world. In fact, in many regions of the world, mobile devices are the only way most users connect to the Internet,” yet security remains a relative afterthought.

ISACA found that “fewer than half (45%) have changed an online password or PIN code.”

And now, connected wearable devices (BYOW) are becoming common in the workplace, yet, “a majority of professionals say their BYOD policy does not address wearable tech, and some do not even have a BYOD policy,” Clyde said.

The age of Incident Response (IR)
All of the above issues have led to an increased focus on IR. According to Schneier, this is not just the year but the decade of IR, following a decade of protection products and another of detection products.

In his blog post, he cited three trends: more data held in the cloud and more networks outsourced; more APTs by nation-states; and a continuing lack of investment in protection and detection, leaving the bulk of the burden on response.
But IR has been more on everybody’s lips in 2014 than even a couple of years ago. The mantra of security experts is that it is not a matter of if, but when, an organization will be breached, and that an effective IR plan (combined with detection) can make attacks more of a nuisance than a disaster.

Getting IR right is crucial, but Tom Bain, vice president of CounterTack, calls it, “the hardest job in security. You can have all the technology in place to detect, prevent and analyze, but if your workflow is broken, or the team is so inundated with incident investigation, you are still vulnerable,” he said.

More regulation, please

An industry that generally decries government regulation – retail – is now singing the opposite tune when it comes to cyber security.

A Nov. 6 letter signed by 44 state and national organizations representing retailers, addressed to the leaders of both houses of Congress, called for, “a single federal law applying to all breached entities (to) ensure clear, concise and consistent notices to all affected consumers regardless of where they live or where the breach occurs.”

Such a law would be “a good first step,” Sprickerhoff said. “There are 38 states with different definitions of what is a breach, so things are getting a bit out of hand. If you had a unifying description of what needs to be done, that’s not a bad thing.”

But, of course, notification is not the same as improving security. And there are limits to what regulation can accomplish in that area.
“I worry that ‘compliance with frameworks’ attracts a lot of attention,” said Richard Bejtlich, chief security strategist at FireEye. “I would prefer that organizations focus on results or outputs, like what was the time from detection to containment?

“Until organizations track those metrics, based on results, they will not really know if their security posture is improving,” he said.

What to do?
There are, of course, no magic bullets in security, Isaacs said, noting that it is almost impossible to single out the biggest threat. “I heard a speech where it was described as ‘death by a thousand cuts,’” she said.

But experts do have suggestions. Sprickerhoff said more training is crucial, not just security awareness training for employees but also training for the next generation of IT security experts.

“I don’t think it’s ever been harder to find good people in IT security,” he said. “There’s not much in course work at the college level.”

Eyal Firstenberg, vice president research, LightCyber, said improving security is going to take a combination of technology and training.

“There is a need for fast and accurate alerts and notifications, which ultimately determine the outcome of these cyber engagements,” he said, but added that, “organizations need more professional diagnosticians on staff who are trained to know what threats are real and need to be addressed, and which ones aren’t.”

Ashley Hernandez, an instructor for Guidance Software, calls for more communication among organizations. “Security professionals need to have a way to share intelligence about patterns or attack types to others in their industry or trusted security groups,” she said.

Clyde notes that ISACA, “has a number of programs, from risk governance frameworks like COBIT 5 to the Cybersecurity Nexus (CSX), to ensure cybersecurity professionals have the skills they need to defend enterprises from the plethora of threats.”

Finally, Loomis offers a short list:
Improve procurement processes. “It takes too long to buy new tools,” he said.
Start educating your staff on what the DHS and NIST frameworks really are.
Read the MITRE book on the 10 strategies to a world-class SOC.
Stop believing the marketing and get real-world feedback on tools. “Security has put a lot of money into marketing, but that doesn’t mean the solution is right for the organization,” he said.
Run simulations. “When was the last time a company ran a real cyber drill?” he asked.
Stop following paper policy. “Militarizing your team, running drills, making it second nature is what will help the response process, not following a checklist,” he said.



Room to grow: Tips for data center capacity planning

Capacity planning needs to provide answers to two questions: What are you going to need to buy in the coming year? And when are you going to need to buy it?

To answer those questions, you need to know the following information:
Current usage: Which components can influence service capacity? How much of each do you use at the moment?
Normal growth: What is the expected growth rate of the service, without the influence of any specific business or marketing events? Sometimes this is called organic growth.
Planned growth: Which business or marketing events are planned, when will they occur, and what is the anticipated growth due to each of these events?
Headroom: Which kind of short-term usage spikes does your service encounter? Are there any particular events in the coming year, such as the Olympics or an election, that are expected to cause a usage spike? How much spare capacity do you need to handle these spikes gracefully? Headroom is usually specified as a percentage of current capacity.
Timetable: For each component, what is the lead time from ordering to delivery, and from delivery until it is in service? Are there specific constraints for bringing new capacity into service, such as change windows?

From that information, you can calculate the amount of capacity you expect to need for each resource by the end of the following year with a simple formula:
Future Resources = Current Usage x (1 + Normal Growth + Planned Growth) + Headroom
You can then calculate for each resource the additional capacity that you need to purchase:

Additional Resources = Future Resources – Current Resources
Perform this calculation for each resource, whether or not you think you will need more capacity. It is okay to reach the conclusion that you don’t need any more network bandwidth in the coming year. It is not okay to be taken by surprise and run out of network bandwidth because you didn’t consider it in your capacity planning. For shared resources, the data from many teams will need to be combined to determine whether more capacity is needed.
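
To make that concrete, here is a minimal Python sketch of the two formulas above. The resource, the capacity figures and the growth rates are made-up examples, and headroom is treated as a percentage of current capacity, as described earlier.

```python
# Sketch of: Future Resources = Current Usage x (1 + Normal Growth + Planned Growth) + Headroom
#            Additional Resources = Future Resources - Current Resources

def additional_resources(current_usage, current_capacity,
                         normal_growth, planned_growth, headroom_pct):
    """Return how much extra capacity to buy for one resource.

    Growth rates and headroom are fractions, e.g. 0.25 means 25%.
    Headroom is expressed as a percentage of current capacity.
    """
    future = current_usage * (1 + normal_growth + planned_growth) \
             + headroom_pct * current_capacity
    return max(0, future - current_capacity)

# Example: a storage pool with 600 TB used of 800 TB installed,
# 30% organic growth, 10% planned growth from a launch, 20% headroom.
extra_tb = additional_resources(current_usage=600, current_capacity=800,
                                normal_growth=0.30, planned_growth=0.10,
                                headroom_pct=0.20)
print(f"Additional storage to order: {extra_tb:.0f} TB")  # -> 200 TB
```
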
Current usage

Before you can consider buying additional equipment, you need to understand what you currently have available and how much of it you are using. Before you can assess what you have, you need a complete list of all the things that are required to provide the service. If you forget something, it won’t be included in your capacity planning, and you may run out of that one thing later, and as a result be unable to grow the service as quickly as you need.
What to track

If you are providing Internet based services, the two most obvious things needed are some machines to provide the service and a connection to the Internet. Some machines may be generic machines that are later customized to perform given tasks, whereas others may be specialized appliances.

Going deeper into these items, machines have CPUs, caches, RAM, storage and network. Connecting to the Internet requires a local network, routers, switches and a connection to at least one ISP. Going deeper still, network cards, routers, switches, cables and storage devices all have bandwidth limitations. Some appliances may have higher-end network cards that need special cabling and interfaces on the network gear. All networked devices need IP addresses. These are all resources that need to be tracked.

Taking one step back, all devices run some sort of operating system, and some run additional software. The operating systems and software may require licenses and maintenance contracts. Data and configuration information on the devices may need backing up to yet more systems. Stepping even farther back, machines need to be installed in a data center that meets their power and environment needs. The number and type of racks in the datacenter, the power and cooling capacity and the available floor space all need to be tracked. Data centers may provide additional per-machine services, such as console service. For companies that have multiple datacenters and points of presence, there may be links between those sites that also have capacity limits. These are all additional resources to track.

Outside vendors may provide some services. The contracts covering those services specify cost or capacity limits. To make sure that you have covered every possible aspect, talk to people in every department, and find out what they do and how it relates to the service. For everything that relates to the services, you need to understand what the limits are, how you can track them and how you can measure how much of the available capacity is used.
How much do you have

There is no substitute for a good up-to-date inventory database for keeping track of your assets. The inventory database should be kept up to date by making it a core component in the ordering, provisioning and decommissioning processes. An up-to-date inventory system gives you the data you need to find out how much of each resource you have. It should also be used to track the software license and maintenance contract inventory, and the contracted amount of resources that are available from third parties.

Using a limited number of standard machine configurations and having a set of standard appliances, storage systems, routers and switches makes it easier to map the number of devices to the lower-level resources, such as CPU and RAM, that they provide.

Terms to know
QPS: Queries per second. Usually how many web hits or API calls received per second.
Active Users: The number of users who have accessed the service in the specified timeframe.
MAU: Monthly active users. The number of users who have accessed the service in the last month.
Engagement: How many times on average an active user performs a particular transaction.
Primary resource: The one system-level resource that is the main limiting factor for the service.
Capacity limit: The point at which performance starts to degrade rapidly or become unpredictable.
Core driver: A factor that strongly drives demand for a primary resource.
Time series: A sequence of data points measured at equally spaced time intervals. For example, data from monitoring systems.

How much are you using now
Identify the limiting resources for each service. Your monitoring system is likely already collecting resource use data for CPU, RAM, storage and bandwidth. Typically it collects this data at a higher frequency than required for capacity planning. A summarization or statistical sample may be sufficient for planning purposes and will generally simplify calculations. Combining this data with the data from the inventory system will show how much spare capacity you currently have.

Tracking everything in the inventory database and using a limited set of standard hardware configurations also makes it easy to specify how much space, power, cooling and other data center resources are used per device. With all of that data entered into the inventory system, you can automatically generate the data-center utilization rate.
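
As a rough illustration of combining those two data sources, the Python sketch below assumes a hypothetical inventory keyed by standard machine configurations and already-summarized peak usage figures from the monitoring system; the configuration names and numbers are invented.

```python
# Hypothetical per-device resources for each standard hardware configuration.
STANDARD_CONFIGS = {
    "web-std": {"cpu_cores": 32, "ram_gb": 128, "power_w": 350},
    "db-std":  {"cpu_cores": 64, "ram_gb": 512, "power_w": 600},
}

inventory = {"web-std": 40, "db-std": 10}                           # devices in service
peak_usage = {"cpu_cores": 1100, "ram_gb": 5800, "power_w": 19000}  # from monitoring

def spare_capacity(inventory, usage):
    """Total installed resources (from inventory) minus measured peak usage."""
    totals = {}
    for config, count in inventory.items():
        for resource, per_device in STANDARD_CONFIGS[config].items():
            totals[resource] = totals.get(resource, 0) + per_device * count
    return {resource: totals[resource] - usage.get(resource, 0) for resource in totals}

for resource, spare in spare_capacity(inventory, peak_usage).items():
    print(f"{resource}: {spare} spare")
```
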
Normal growth

The monitoring system directly provides data on current usage and current capacity. It can also supply the normal growth rate for the preceding years. Look for any noticeable step changes in usage, and see if these correspond to a particular event, such as the roll-out of a new product or a special marketing drive. If the offset due to that event persists for the rest of the year, calculate the change and subtract it from subsequent data to avoid including this event-driven change in the normal growth calculation. Plot the data from as many years as possible on a graph, to determine if the normal growth rate is linear or follows some other trend.
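
The sketch below shows one way to do that adjustment in Python, with made-up monthly figures: subtract the persistent offset introduced by a known event, then fit a simple linear trend to estimate organic growth. It uses statistics.linear_regression, which requires Python 3.10 or later.

```python
from statistics import linear_regression

# Made-up monthly usage; a product launch at index 4 adds a persistent ~33-unit step.
monthly_usage = [100, 104, 108, 113, 150, 154, 159, 163, 168, 172, 177, 181]
event_month, event_offset = 4, 33

# Subtract the event-driven offset from every month at or after the event.
adjusted = [u - event_offset if i >= event_month else u
            for i, u in enumerate(monthly_usage)]

# Fit a straight line to the adjusted series to estimate organic growth.
slope, intercept = linear_regression(range(len(adjusted)), adjusted)
print(f"Organic growth: about {slope:.1f} units per month")
```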

Planned growth
The second step is estimating additional growth due to marketing and business events, such as new product launches or new features. For example, the marketing department may be planning a major campaign in May that it predicts will increase the customer base by 20 to 25 percent. Or perhaps a new product is scheduled to launch in August that relies on three existing services and is expected to increase the load on each of those by 10 percent at launch, increasing to 30 percent by the end of the year. Use the data from any changes detected in the first step to validate the assumptions about expected growth.

Headroom
Headroom is the amount of excess capacity that is considered routine. Any service will have usage spikes or edge conditions that require extended resource usage occasionally. To prevent these edge conditions from triggering outages, spare resources must be routinely available. How much headroom is needed for any given service is a business decision. Since excess capacity is largely unused capacity, by its very nature it represents potentially wasted investment. Thus a financially responsible company wants to balance the potential for service interruption with the desire to conserve financial resources.

Your monitoring data should be picking up these resource spikes and providing hard statistical data on when, where and how often they occur. Data on outages and postmortem reports are also key in determining reasonable headroom.

Another component in determining how much headroom is needed is the amount of time it takes to have additional resources deployed into production from the moment that someone realizes that additional resources are required. If it takes three months to make new resources available, then you need to have more headroom available than if it takes two weeks or one month. At a minimum, you need sufficient headroom to allow for the expected growth during that time period.
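
With illustrative numbers (a 4 percent monthly growth rate, a three-month lead time and a 15 percent spike allowance, all assumptions rather than figures from the text), the minimum headroom implied by the lead time can be estimated like this:

```python
monthly_growth_rate = 0.04   # organic growth per month, taken from the monitoring trend
lead_time_months = 3         # order-to-in-service time for this resource
spike_allowance = 0.15       # spare capacity reserved for routine usage spikes

# Growth expected while waiting for new capacity to arrive and go into service.
growth_during_lead_time = (1 + monthly_growth_rate) ** lead_time_months - 1

print(f"Growth during lead time: {growth_during_lead_time:.1%}")                 # ~12.5%
print(f"Recommended headroom: {growth_during_lead_time + spike_allowance:.1%}")  # ~27.5%
```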

Resiliency

Reliable services also need additional capacity to meet their SLAs. The additional capacity allows for some components to fail, without the end users experiencing an outage or service degradation. The additional capacity needs to be in a different failure domain; otherwise, a single outage could take down both the primary machines and the spare capacity that should be available to take over the load.

Failure domains also should be considered at a large scale, typically at the data-center level. For example, facility-wide maintenance work on the power systems requires the entire building to be shut down. If an entire datacenter is offline, the service must be able to smoothly run from the other data centers with no capacity problems. Spreading the service capacity across many failure domains reduces the additional capacity required for handling the resiliency requirements, which is the most cost-effective way to provide this extra capacity. For example, if a service runs in one data center, a second data center is required to provide the additional capacity, about 50 percent. If a service runs in nine data centers, a tenth is required to provide the additional capacity; this configuration requires only 10 percent additional capacity.

The gold standard is to provide enough capacity for two data centers to be down at the same time. This permits one to be down for planned maintenance while the organization remains prepared for another data center going down unexpectedly.
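
A small Python sketch of that failure-domain arithmetic, matching the one-of-two and one-of-ten examples above; the function is an illustration of the reasoning, not a sizing tool.

```python
def redundant_fraction(num_datacenters: int, down: int = 1) -> float:
    """Fraction of total deployed capacity that is redundancy when `down`
    failure domains can be lost without losing serving capacity."""
    if num_datacenters - down < 1:
        raise ValueError("need at least one surviving failure domain")
    return down / num_datacenters

for n in (2, 10):
    print(f"{n} data centers, 1 down: {redundant_fraction(n):.0%} of capacity is redundant")
# 2 DCs -> 50%, 10 DCs -> 10%, matching the text.
# The gold standard of surviving two simultaneous outages uses down=2.
```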

Timetable
Most companies plan their budgets annually, with expenditures split into quarters. Based on your expected normal growth and planned growth bursts, you can map out when you need the resources to be available. Working backward from that date, you need to figure out how long it takes from “go” until the resources are available.

How long does it take for purchase orders to be approved and sent to the vendor? How long does it take from receipt of a purchase order until the vendor has delivered the goods? How long does it take from delivery until the resources are available? Are there specific tests that need to be performed before the equipment can be installed? Are there specific change windows that you need to aim for to turn on the extra capacity? Once the additional capacity is turned on, how long does it take to reconfigure the services to make use of it? Using this information, you can provide an expenditures timetable.
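
Those lead-time questions boil down to simple date arithmetic. The sketch below, with hypothetical lead times, works backward from the date the capacity must be in service to the date the purchase order has to go out.

```python
from datetime import date, timedelta

needed_in_service = date(2015, 5, 1)      # when the planned growth is expected to hit

lead_times_days = {                       # hypothetical durations for each step
    "purchase order approval": 14,
    "vendor build and delivery": 42,
    "install, test and change window": 21,
    "reconfigure services to use new capacity": 7,
}

order_by = needed_in_service - timedelta(days=sum(lead_times_days.values()))
print(f"Purchase order must go out by {order_by}")   # 84 days ahead of the deadline
```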

Physical services generally have a longer lead time than virtual services. Part of the popularity of IaaS and PaaS offerings such as Amazon’s EC2 and Elastic Storage is that newly requested resources have virtually instant delivery time.

It is always cost-effective to reduce resource delivery time, because shorter lead times mean paying for less excess capacity to cover the wait. This is a place where automation that prepares newly acquired resources for use has immediate value.

Advanced capacity planning

Large, high-growth environments such as popular Internet services require a different approach to capacity planning. Standard enterprise-style capacity planning techniques are often insufficient. The customer base may change rapidly in ways that are hard to predict, requiring deeper and more frequent statistical analysis of the service monitoring data to detect significant changes in usage trends more quickly. This kind of capacity planning requires deeper technical knowledge. Capacity planners will need to be familiar with concepts such as QPS, active users, engagement, primary resources, capacity limit and core drivers.

Additional math terms
Correlation coefficient: Describes how strongly measurements for different data sources resemble each other.
Moving average: A series of averages, each of which is taken across a short time interval (window), rather than across the whole data set.

Regression analysis: A statistical method for analyzing relationships between different data sources to determine how well they correlate, and to predict changes in one based on changes in another.

EMA: Exponential moving average. It applies a weight to each data point in the window, with the weight decreasing exponentially for older data points.

MACD: Moving average convergence/divergence. An indicator used to spot changes in strength, direction and momentum of a metric. It measures the difference between an EMA with a short window and an EMA with a long window.

Zero line crossover: A crossing of the MACD line through zero happens when there is no difference between the short and long EMAs. A move from positive to negative shows a downward trend in the data, and a move from negative to positive shows an upward trend.

MACD signal line: An EMA of the MACD measurement.

Signal line crossover: The MACD line crossing over the signal line indicates that the trend in the data is about to accelerate in the direction of the crossover. It is an indicator of momentum.
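
To make those definitions concrete, here is a short Python sketch that computes an EMA, the MACD line and its signal line over a synthetic weekly usage series and reports signal line crossovers. The window sizes and the data are illustrative assumptions, not recommendations; a real implementation would read the time series from the monitoring system.

```python
def ema(series, window):
    """Exponential moving average with the conventional alpha = 2 / (window + 1)."""
    alpha = 2 / (window + 1)
    out, prev = [], series[0]
    for value in series:
        prev = alpha * value + (1 - alpha) * prev
        out.append(prev)
    return out

def macd(series, short=12, long=26, signal=9):
    """Return (macd_line, signal_line): short EMA minus long EMA, and an EMA of that."""
    macd_line = [s - l for s, l in zip(ema(series, short), ema(series, long))]
    return macd_line, ema(macd_line, signal)

# Synthetic weekly usage: steady growth for 40 weeks, then a decline.
weekly_usage = [1000 + 12 * i for i in range(40)] + [1470 - 10 * i for i in range(12)]

macd_line, signal_line = macd(weekly_usage)
crossovers = [week for week in range(1, len(weekly_usage))
              if (macd_line[week] - signal_line[week])
              * (macd_line[week - 1] - signal_line[week - 1]) < 0]
print("Signal line crossovers at weeks:", crossovers)  # flags the downturn
```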



HP talks cloud delivery options, the importance of OpenStack, how it competes on price

An in-depth conversation with Bill Hilf, Senior Vice President of Product and Service Management for HP Cloud, about where Helion fits in, cloud consumption models and coming change.

Bill Hilf, Senior Vice President of Product and Service Management for HP Cloud, brings an interesting perspective to his job given his former role as General Manager of Product Management for Windows Azure, Microsoft’s cloud platform. Network World Editor in Chief John Dix and Senior Editor Brandon Butler got Hilf on the line for his big picture view of the importance of OpenStack, why HP recently acquired Eucalyptus, the impetus to compete on price, and the various cloud delivery options customers are pursuing.

How do you position Helion and where does it fit into the market?

Helion is our brand name for our cloud product portfolio which allows customers to deploy in any cloud context, be it a private cloud or a public or a hosted cloud environment. The applications and data and virtual machines that are going to ride on top of that cloud infrastructure can behave consistently across those different environments.

Enterprises are really struggling trying to do the all-in-one cloud model. But they don’t only use a single operating system or database or management tool, so we believe they will need to create a hybrid cloud environment. It’s not so much because they want to, it’s because they need to given the reality of their existing IT environments.

And what is fundamentally different with our approach is we’re building a composable product portfolio so if a customer wants to have only, let’s say, an application platform or only an infrastructure as a service platform, or wants to bring existing hardware, be it HP or non-HP, into a cloud environment, we need to have something that is composable and flexible.

That led us to probably the most important design decision we made, which was to build this product portfolio with a deep spine of open-source technologies. So we have OpenStack at the core of our IaaS layer and Cloud Foundry at the core of our development platform, but it’s not limited to that. We also support a wide range of open source tools, different types of application technologies, different databases and multiple languages. Really our core DNA is building around open source, which means less vendor lock-in and more flexibility for enterprise customers.

We just started to ship the first production-ready GA version of the Helion OpenStack distribution and Helion development platform which we’ve been working on for the past year and a half, and there are a number of ways customers can pick it up. There is a community version users can download and play with for free, they can buy it as stand-alone software to run on their own gear, they can buy it pre-integrated with HP solutions, or they may consume everything as a service. The latter doesn’t have to be a public cloud. It might be a hosted environment inside an enterprise so the customer can consume everything internally to meet regulatory requirements or policies.

So that’s how it will manifest. Customers will have a choice of different cloud models.

So a customer could have you build a cloud within their organization and run it for them as a service?

Yes. So customers might say, “I want all the benefits of a cloud, the speed, the economics, the self-service, but I want it in my data center and I want you to fully manage it, either remotely or in my environment.” That’s particularly appealing to large enterprises and large government agencies. That model is coming up again and again, and there are lots of different terms for it. You can call it managed private clouds, or a cloud-enabled hosting environment, but it’s essentially what you said.

The capital expense is yours and the customer just pays a service fee?

There are all sorts of ways customers want the mathematics to work. Sometimes they’ll want to be an internal cloud broker, providing services to internal customers. We have a big media customer doing this. They have an internal portal that says, “Hey, do you want compute or storage or networking?” And the internal end user has no idea what is actually providing that. Behind the scenes, based on the requirements and the price point and the constraints the end user describes, they can deliver the services either from their Helion OpenStack private cloud or, in some cases, they go out to a public cloud.

So, for example, if a customer wants extreme commodity storage pricing and they have very few constraints on how that data is stored or where, this internal broker might go back with AWS, but it’s presented to the internal customer just as a storage resource. That’s a really common pattern right now. We call it ‘internal service providers’ but it’s kind of cloud brokering.

Can you describe the difference between Helion OpenStack and the Helion Development Platform?

Helion OpenStack is a distribution of OpenStack built around the current tree of Juno. We don’t go in and swap out core components for HP proprietary stuff. We take the core of OpenStack and then do a whole bunch of work to make it easier to install, patch and configure, because that’s where a lot of the pain points are right now in OpenStack. We also do a lot of security work on it and then run it at very large scale in the HP public cloud to test for reliability. We learn a lot from running OpenStack in a large public cloud environment.

Above that we have the Helion Development Platform, which is a PaaS layer, but think of it as using Cloud Foundry as the host, or the run time, for applications. So it supports all these different languages and you can publish your Java app or node.js app or Python app or Ruby app into that full application lifecycle environment.

Then alongside of that we have a set of application services. So, for example, if someone wants to use database-as-a-service, we have an easy-to-use DB service so a developer can quickly add a database to their app. Behind the scenes we do a binding between that database-as-a-service at the PaaS layer, all the way down into OpenStack’s database-as-a-service offering called Trove. That way we can then offer that database-as-a-service at the development platform layer in a way that’s automatically highly available, and automatically has disaster recovery built in because we’re leveraging the Trove system underneath and providing that resilience to the database behind the scenes.

We’ll do a lot more things like that where we basically illuminate the capabilities inside OpenStack at a higher level for developers to take advantage of. For example, there’s this concept called affinity scheduling inside OpenStack where you can say, assign my VM to a high memory machine or assign these VMs to that data center because that is the only one that’s HIPAA compliant. As that grows in OpenStack, we want to light up that type of capability higher in the platform so it becomes really easy for the developer.

Also, what we use behind the scenes in our Helion development platform is Docker. Every app you build on our Helion development platform instantiates as a Docker container so you can take those Docker containers and assign them wherever you want. We think this Docker + OpenStack combination is going to be very powerful.

So, back to your question, they are two different architectural layers. One is targeted at developers, and one is targeted at IT ops. They can be used independently but we’re doing a lot of work to make them better together.

When it comes to use cases for cloud, VMware is positioning its vCloud Air as a natural landing spot for ESX workloads, and Microsoft Azure is a natural spot for Hyper-V and System Center, so where do you see HP being the natural answer?

Because of my Microsoft background I can ask a company what versions of Windows Server and System Center they’re using and I’ll know right away if they’re a Microsoft loyalist or not, and for those customers, the Azure story is compelling. And AWS is definitely the default if you’re a startup and looking for the fastest onramp to getting some compute and storage resources that can scale wide. Where we win is with enterprises that have stepped all the way through the virtualization steps in the past three to four years, companies that have more than 50% of their environment virtualized. Now they’re getting a lot of pressure on being able to go faster.

So what they’re trying to do is take a first step into the cloud, but they are typically encumbered by a tremendous amount of existing IT or security requirements or other business or industry constraints. We have a customer, for example, who just did a few acquisitions, some of which have used public clouds. Their business policy doesn’t allow the use of public clouds so now they have to repatriate those resources back inside their firewall. So we deal with a lot of people who are building private clouds first.

Private cloud on their premise?

Yes. The other big sweet spot for us are service providers and telcos. And there’s a few reasons for that. One, telcos in particular are very open-source oriented. And two, many service providers and telcos are massively threatened by the public cloud vendors. So, if you are a telco or service provider in, let’s say Europe or Asia, Amazon and Google can be really threatening, not just because of their cloud businesses, but because of the whole value chain, all the way down to the device. So they want to ‘OEM’ our public cloud technology because they need to build a competitive offering to an AWS or Google in their markets.

In the enterprise, how critical are network advances such as software defined networking and network function virtualization in supporting this whole hybrid vision?

Frankly, the network is either the enabler or the bottleneck in most cloud deployments because so much of a horizontally scalable distributed system is deeply tethered to network capabilities. So when you start moving to 100 to 1,000 to 10,000 to 100,000 nodes in a system, the network architecture becomes increasingly critical. In our distro of Helion OpenStack we make sure our networking functionality is great upstream in Neutron, which is the network component inside OpenStack, but we also need to be pluggable with other SDN controllers, with VMware NSX, with our own HP SDN, etc. And down the road we’ll have to be pluggable with others that emerge because there won’t be one SDN to rule them all, even though I’m sure some vendors would love to have that control point, but it’s just not realistic.

This is one of the challenges of building commercial open-source products: you have to add as much value as possible without ripping out the flexibility that customers were originally interested in with open source, and without tainting it, because it's very easy to go too far in one direction or the other. It becomes a Swiss Army knife, good at a whole bunch of things but not really good at any one thing. Or it goes the other way and becomes extremely proprietary, and you lose the reason you built on open source in the first place.

One way we’re addressing the specific networking needs for one of our customer segments, communication service providers, is through a partnership with Wind River to integrate their carrier grade technologies into Helion OpenStack. This will provide communications service providers with an open source based cloud platform to meet their demanding reliability requirements and accelerate their transition to NFV deployments. All within our open source model and keeping OpenStack API compatibility.

Are all Helion private clouds based on OpenStack or do you sell some non-OpenStack private clouds as well?

Historically we had a private cloud infrastructure-as-a-service offering called Matrix that was not open source. This was actually before I joined. There are still customers that use that, but over time our plan is to evolve that product with our Helion OpenStack distribution. We will do it in a thoughtful manner so we don’t force customers to rip and replace. But going forward we’ve made a company-wide commitment to OpenStack.

It’s a fundamental bet. We actually got asked once at a very senior meeting, “What’s Plan B if OpenStack doesn’t work out?” I said there is no Plan B. If you have a Plan B, having lived through this at Microsoft, you end up hedging, doing things to secure the option. So you have to go all in if you really want a platform to take off. So it’s a big, fundamental decision for us and a fundamental focus that we have to make OpenStack be what we need it to be for our enterprise customers. There’s not a lot of “let’s sit around and wait for it to evolve.”

There are certainly still some big challenges with OpenStack, but we have many customers who are happily running hundreds of nodes and many thousands of VMs in OpenStack for a private cloud and getting great benefit today.

In terms of hypervisor support, do you guys focus on one hypervisor or support a bunch?

At every layer we need to give customers choice. So we support KVM, which is the default people use in most cases, but with this release of our Helion OpenStack we support ESX and very shortly we’ll support Hyper-V.

But at each layer we support choice. At the hardware layer, for example, we support our HP gear but have a certification test for third parties on non-HP gear, and a set of tests and benchmarks we give to third-party OEMs to validate against. We know we’re not going to sell an HP server with every software sale – that’s not reality.

Then even further up the stack we have multiple programming languages and frameworks people can choose from: Python, Ruby, Java, .NET and so on. That polyglot environment is important for us.

So we’re not only giving customers a choice of where to install and run their cloud, we also give them a lot of choice when it comes to the technology they can use because, at the end of the day, the VMware story is very vertical, the Red Hat story is very vertical, the Microsoft story, even though they talk a lot about open source, is really very vertical. Choice and a platform truly built on open source – that’s a differentiation for us.

If you’re pushing a high-end, enterprise-level story, why on the Helion website are you shouting about price so much? That kind of screams commodity.

As of 2014 less than 10% of enterprise IT is using cloud computing, so the growth opportunity is huge. And when you are trying to fight an early market battle for share, particularly for OpenStack oriented customers, you want to grab as much share as fast as possible.

One of the biggest advantages of a company like HP is we have all sorts of ways we can monetize. We don’t need to sell software at huge margins. We don’t need to sell a server for everything we do. We don’t need to sell services for everything. We have all kinds of ways we can make money through the broad HP. So that gives us a bunch of freedom, actually more freedom than I had at Microsoft because we can do things on every dimension to compete and aggressively grab market share.

And one tool we can use is price. So we can go undercut the other guy because our P&L isn’t solely based on software markets. We certainly compete with other OpenStack distributions like Red Hat. So one of the reasons we’re coming in at the price point we are is because we want to make it zero friction for our customer when they do that comparison of OpenStack distro A versus OpenStack distro B, at every level of comparison.

But, that said, almost everything we do is through a larger enterprise relationship. Typically when an enterprise is buying from HP they’re not making a singular decision for one piece of software or one server order or one set of services. So we talk about the big picture, what our cloud platform can do, how we indemnify our distribution of OpenStack, product capabilities, pricing, the whole thing.

This is really hard when you have a business model that is pegged to one thing, like software. You end up between a rock and a hard place: you can't easily discount below your margin line because it's very difficult to make that up. Microsoft has a little bit more flexibility because it has such a breadth of software and offerings. For Red Hat and VMware it's a little different because they are bound to their business model, so they have some very hard floors and ceilings in terms of flexibility.

You recently acquired Eucalyptus which doesn’t have big OpenStack roots. They’re mostly about AWS integration. How do you see that fitting in?

Eucalyptus was really two things for us. It was a good collection of people who know how to build cloud software, and it was the AWS interoperability piece. I keep talking about choices, and we realize the design pattern of AWS is hugely relevant. So we needed the ability to tell customers, if you have or are interested in that design pattern, we have a way to support that.

So where we typically see the Eucalyptus demand is where a customer wants the ability to move an app out of AWS back to a private or managed cloud environment, or where someone says, I don't know what's going to happen yet in terms of going to the public cloud, so I'm going to first build my private cloud apps with Eucalyptus and the AWS design pattern (basically meaning using the EC2 APIs, the S3 APIs, etc.), building them in a way that gives me the flexibility to locate the work where I want.
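Building to the AWS design pattern in this sense means writing code against the EC2 and S3 APIs, which can then be pointed at either AWS itself or a private, Eucalyptus-style endpoint. A rough sketch with boto3 follows; the endpoint URL, region and bucket names are assumptions, not actual HP or Eucalyptus values.

```python
# Sketch: the same boto3 code can target AWS or a private EC2/S3-compatible
# endpoint (for example a Eucalyptus-style cloud) just by changing endpoint_url.
# The endpoint, region and bucket names below are placeholders.
import boto3

PRIVATE_ENDPOINT = "https://cloud.example.internal:8773/"   # hypothetical

s3 = boto3.client(
    "s3",
    endpoint_url=PRIVATE_ENDPOINT,   # omit this argument to use public AWS
    region_name="private-1",
)

# Identical API calls either way, which is what keeps the workload portable.
s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello private cloud")

ec2 = boto3.client("ec2", endpoint_url=PRIVATE_ENDPOINT, region_name="private-1")
print(ec2.describe_instances())
```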

What should we look for this coming year?

You’ll see us continue to build out our Helion distro of OpenStack and our Helion development platform, so you’ll see new services, new capabilities, that kind of thing. You’ll see us do a lot in the telco/service provider/NFV space.

And later in the year you’ll hear us talk a lot about a new model for enterprises that want to consume managed cloud services but don’t want to buy anything physical, don’t want to own anything anymore, that just want to consume, but in a way that matches their business realities today. We’ll be doing a lot in that space. I’m a believer that the cloud industry we have today is going to look very different in the future as the enterprise really starts adopting cloud technologies – and then all cloud vendors will shape their strategies to fit what enterprises want. So we’re trying to skate to where the puck will be and start to invent some of those new models.

You mentioned that analysts say only 10% of enterprise needs are supported by the cloud today. What’s the timeframe for change?

That’s the multi-trillion dollar question, isn’t it? But I see two enterprise patterns happening right now and this may inform the answer. One is the linear step. I’m going to move from virtualization to private cloud infrastructure-as-a-service, then I’ll try out some of this PaaS stuff to see how that really makes sense. Then I’ll see if I can run that across multiple data centers and then maybe see if a public cloud thing makes sense. So it’s kind of a linear mode.

The other pattern I hear, and this is the riskier one, is where the CIO says any new app inside my enterprise will be built to platform-as-a-service and can have zero knowledge of an operating system underneath it. What they’re trying to do is say, let’s start building in the new cloud-native model so we don’t have to worry about migrations and lift and shift and all of that.

But then there’s another question, and that is, which platform-as-a-service? At some point you’re binding to something, you’re making some commitment to some API somewhere. It may not be at the operating system level anymore. It may be higher up the stack in the middleware.

Then frequently we see customers say, we won’t move our existing resources to a cloud model. We’ll build the next project or the next deployment in a true cloud model. We’ll build that as a stand-alone system and then try to bridge back, usually through management tools, to the old. That is very common as well.


 


How to set up 802.1X client settings in Windows

802.1X provides security for wired and Wi-Fi networks

Understanding all the 802.1X client settings in Windows can certainly help during deployment and support of an 802.1X network. This is especially true when manual configuration of the settings is required, such as in a domain environment or when fine-tuning wireless roaming for latency-sensitive clients and applications, like VoIP and video.

An understanding of the client settings can certainly be beneficial for simple environments as well, where no manual configuration is required before users can log in. You still may want to enable additive security measures and fine-tune other settings.

Though the exact network and 802.1X settings and interfaces vary across the different versions of Windows, most are quite similar between Windows Vista and Windows 8.1. In this article, we show and discuss those in Windows 7.

Protected EAP (PEAP) Properties

Let’s start with the basic settings for Protected EAP (PEAP), the most popular 802.1X authentication method.

On a Network Connection’s Properties dialog window you can access the basic PEAP settings by clicking the Settings button.

Next, you move through the settings on this PEAP Properties dialog window.

Validate server certificate: When enabled, Windows will try to ensure the authentication server that the client uses is legitimate before passing on its login credentials. This server certificate validation tries to prevent man-in-the-middle attacks, where someone sets up a fake network and authentication server so they can capture your login credentials.

By default, server certificate validation is turned on and we certainly recommend keeping it enabled, but temporarily disabling it can help troubleshoot client connectivity issues.

Connect to these servers: When server certificate validation is used, here you can optionally define the server name that should match the one identified on the server's certificate. If the names match, the authentication process proceeds; otherwise it doesn't.

Typically, Windows will automatically populate this field based upon the server certificate used and trusted the first time a user connects.

Trusted Root Certification Authorities: This is the list of certification authority (CA) certificates installed on the machine. You select which CA the server’s certificate was issued by, and authentication proceeds if it matches.

Typically, Windows will also automatically choose the CA used by the server certificate the first time a user connects.

Do not prompt user to authorize new servers or trusted certification authorities: This optional feature will automatically deny authentication to servers that don't match the defined server name and chosen CA certificate. When this is disabled, users are instead asked whether they'd like to trust the new server certificate, a prompt they likely won't understand.

We recommend this additive security as well. It can help prevent users from unknowingly connecting to a fake network and authentication server and falling victim to a man-in-the-middle attack. Unlike the two previous settings, you must manually enable this one.

The next setting is where you choose the tunneled authentication method used by PEAP. Since Secured password (EAP-MSCHAP v2) is the most popular, we’ll go through it. Clicking the Configure button shows one setting for EAP-MSCHAP v2: Automatically use my Windows logon name and password (and domain if any).

This is the dialog box you see after clicking the Configure button for the EAP-MSCHAP v2 authentication method.

This should only be enabled if your Windows login credentials match those in the authentication server, for instance if the server is connected to Active Directory. After connecting to an 802.1X network for the first time, Windows should automatically set this appropriately.

Back on the PEAP Properties dialog window, under the authentication method, are four more settings:

Enable Fast Reconnect: Fast Reconnect, also referred to as EAP Session Resumption, caches the TLS session from the initial connection and uses it to simplify and shorten the TLS handshake process for re-authentication attempts. Since it saves clients roaming between access points from having to perform a full authentication, it reduces overhead on the network and improves roaming for latency-sensitive applications.

Fast Reconnect is usually enabled by default when a client connects to an 802.1X network that supports it, but if you push network settings to clients you may want to ensure Fast Reconnect is enabled; the profile export sketch at the end of this section shows one way to capture and redistribute these settings.

Enforce Network Access Protection: When enabled, this forces the client to comply with the Network Access Protection (NAP) policies of a NAP server set up on the network. For instance, NAP can restrict connections of clients that don't have antivirus, a firewall, the latest updates, or other health related vulnerabilities.

Disconnect if server does not present cryptobinding TLV: When manually enabled, this requires that the server use cryptobinding Type-Length-Value (TLV); otherwise the client won't proceed with authentication. For RADIUS servers that support cryptobinding TLV, it increases the security of the TLS tunnel in PEAP by combining the inner and outer method authentications so that attackers cannot perform man-in-the-middle attacks.

Enable Identity Privacy: When using tunneled EAP authentication (like PEAP), the username (identity) of the client is sent twice to the authentication server. First, it's sent unencrypted, called the outer identity, and then inside an encrypted tunnel, called the inner identity. In most cases, you don't have to use the real username on the outer identity, which prevents eavesdroppers from discovering it. However, depending upon your authentication server you may have to include the correct domain or realm.

This setting is disabled by default and I recommend manually enabling it. After enabling identity privacy, you can type whatever you want as the outer username, such as "anonymous". Alternatively, if the domain or realm is required, use something like "anonymous@domain.com".
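If you push 802.1X network settings to clients rather than configuring each one by hand, a simple way to capture everything above is to export a known-good Wi-Fi profile, which stores the PEAP and 802.1X options as XML, and import it elsewhere. Below is a minimal sketch that wraps Windows' built-in netsh tool from Python; the profile name, folder and interface name are assumptions, and it should be run from an elevated prompt.

```python
# Sketch: export a configured Wi-Fi profile (including its PEAP/802.1X XML)
# and re-import it on another machine using Windows' built-in netsh tool.
# Profile name and folder are placeholders; run from an elevated prompt.
import subprocess

PROFILE = "CorpWiFi"          # hypothetical SSID/profile name
EXPORT_DIR = r"C:\Temp"

# Export the profile for inspection or reuse. The exported file is named
# <interface>-<profile>.xml; an interface called "Wi-Fi" is assumed below.
subprocess.run(
    ["netsh", "wlan", "export", "profile", f"name={PROFILE}", f"folder={EXPORT_DIR}"],
    check=True,
)

# On a target machine, add the exported profile for all users.
subprocess.run(
    ["netsh", "wlan", "add", "profile",
     rf"filename={EXPORT_DIR}\Wi-Fi-{PROFILE}.xml", "user=all"],
    check=True,
)
```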
Advanced 802.1X Settings

On a Network Connection’s Properties dialog window you can access advanced settings by clicking the Advanced Settings button.

The first tab is the advanced 802.1X settings.
On the 802.1X Settings tab, you can specify the authentication mode: User, Computer, User or Computer, or Guest authentication.

User authentication will use only the credentials provided by the user, while Computer authentication uses only the computer’s credentials. Guest authentication allows connections to the network that are regulated by the restrictions and permissions set for the Guest user account.

Using the combined User or Computer authentication option allows the computer to log into the network before a user logs into Windows and then also enables the user to login with their own credentials afterward. This enables, for instance, the ability to use 802.1X within a domain environment, as the computer can connect to the network and domain controller before a user actually logs into Windows.

When User only authentication is used, you can click the Save Credentials button to input the username and password. Additionally, you can remove saved credentials by marking the Delete credentials for all users checkbox.

The second section of the 802.1X Settings tab is where you can enable and configure Single Sign On functionality. If the system and network are set up properly, using this feature eliminates the need to provide separate login credentials for Windows and 802.1X. Instead of having to input a username and password during the 802.1X authentication, it uses the Windows account credentials. Single sign-on (SSO) features save time for both users and administrators and help to create an overall more secure network.

Advanced 802.11 Settings

On the Advanced Settings dialog box you’ll see an 802.11 settings tab if WPA2 security is used. First are the Fast Roaming settings:

The second tab on the Advanced Settings window holds the advanced 802.11 settings.
Enable Pairwise Master Key (PMK) Caching: This allows clients to perform a partial authentication process when roaming back to the access point the client had originally performed the full authentication on. This is typically enabled by default in Windows, with a default expiration time of 720 minutes (12 hours).

This network uses pre-authentication: When both the client and the access points support pre-authentication, you can manually enable this setting so the client doesn't have to perform a full 802.1X authentication process when connecting or roaming to new access points on the network. This can help make the roaming process even more seamless, useful for sensitive clients and traffic, such as voice and video. Once a client authenticates via one access point, the authentication details are conveyed to the other access points. Basically it's like doing PMK caching with all access points on the network after connecting to just one.

Enable Federal Information Processing Standard (FIPS) compliance for this network: When manually enabled, the AES encryption will be performed in a FIPS 140-2 certified mode, which is a government computer security standard. It would make Windows 7 perform the AES encryption in software, rather than relying on the wireless network adapter.



Hey Samsung: Not everybody has to be a platform

It’s easy to see why everybody wants to be a platform these days. Just look at Apple: By owning both the hardware and the operating system, it gets total control over what developers build on its platform — and a sizable cut of the revenues besides. In return, developers get an unmatched distribution channel directly to customers’ devices. As Apple extends to new devices, those developers get to come along.

So it’s no wonder that Samsung, eternally defining itself by its struggles with Apple, wants to be a platform, too, especially in the face of shrinking profits. On paper, it seems so simple: Samsung has the hardware business. It’s making some wearables, investing in a connected home business with the SmartThings acquisition, and getting into virtual reality.

Open some APIs, give out some SDKs, talk about “open” and host a big-time developer conference in San Francisco (as in, the Samsung Developer Conference I write this from) to make sure everybody knows how committed you are.

But what Samsung is lacking, what major platform providers have in spades, is something harder to pin down, and much harder to imitate. Apple, Salesforce, even Microsoft lately, have demonstrated that most vague, but most important notion. They have vision — a clear and present mission that drives them forward, even when that path isn’t immediately obvious.

But Samsung? Samsung has really good phones and some solid tablets and a partnership with Oculus and SmartThings and now Project Beyond, a super nifty 360-degree streaming 3D high definition camera. But in the entire two-hour keynote session this morning, attendees were treated to a rapid-fire string of previously announced non-news like the Simband open health wearable platform (now open for developer sign-ups), a demo of what’s possible with SmartThings and a reaffirmation that the company will keep investing in Samsung Knox, its enterprise workspace feature.

Other than the virtual reality stuff, and the Project Beyond camera, which are actually, really, very cool, it’s mostly a lot of the same old. The only “new” thing coming to Samsung devices is Samsung Flow, a me-too take on Apple’s cross-device Continuity features. Other than that, the company was just trying to show developers that products exist and can be built upon without offering a tremendously compelling case for why. It’s not really leadership material.

When Apple is selling watches, Google is selling Nest thermostats, and Microsoft is revamping Windows for the multi-device future, Samsung’s follow-along mentality of “just add developers” just doesn’t seem like enough, no matter how many sensors it adds to Simband.

(The company’s technical keynote takes place Thursday; maybe there’ll be something more impressive that will change my mind. But I doubt it.)

The point here is that Samsung is a hardware company, in so many ways. It’s succeeded in the first place by making devices that people actually want to use. And part of how it got there was by being part of somebody else’s ecosystem. And yeah, it must chafe those at Samsung corporate command to have Google to thank for the success of the Galaxy S line of phones. But maybe, just maybe, throwing your support behind an operating system that nobody asked for, wants, needs or supports (Tizen) wasn’t the right answer, no matter how technologically proficient it is.

And in the same way people ask whether Microsoft’s hardware business is good for Microsoft’s vision as a service provider, they have to also ask whether this whole insistence on being a software provider is good for Samsung’s business. Nobody seems excessively jazzed about developing for the Samsung-backed Tizen ecosystem in a world where Android and iOS are already pretty well standardized.

“Ecosystem” is just a fancy word for building the stuff that users, not corporations, want. Rather than controlling everything, maybe a renewed focus on being the best part of the Android ecosystem — and on making what customers actually want — would do Samsung good.



Cisco patches serious vulnerabilities in small business RV Series routers

The flaws allow attackers to execute commands, overwrite files and launch CSRF attacks

Cisco Systems released patches for its small business RV Series routers and firewalls to address vulnerabilities that could allow attackers to execute arbitrary commands and overwrite files on the vulnerable devices.

The affected products are Cisco RV120W Wireless-N VPN Firewall, Cisco RV180 VPN Router, Cisco RV180W Wireless-N Multifunction VPN Router, and Cisco RV220W Wireless Network Security Firewall. However, firmware updates have been released only for the first three models, while the fixes for Cisco RV220W are expected later this month.


One of the patched flaws allows an attacker to execute arbitrary commands as root — the highest privileged account — through the network diagnostics page in a device’s Web-based administration interface. The flaw stems from improper input validation in a form field that’s supposed to only allow the PING command. Its exploitation requires an authenticated session to the router interface.
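The underlying issue, taking user input from a web form and handing it to a shell command, is a classic command injection pattern. The sketch below is a generic illustration of the fix, not Cisco's firmware code: validate that the target is a plain IP address or hostname, and invoke ping without a shell so appended commands are never interpreted.

```python
# Generic illustration of the flaw class, not Cisco's firmware code:
# validate the user-supplied ping target strictly and never pass it
# through a shell, so "8.8.8.8; rm -rf /" style input cannot inject commands.
import ipaddress
import re
import subprocess

HOSTNAME_RE = re.compile(r"^[A-Za-z0-9]([A-Za-z0-9.-]{0,253}[A-Za-z0-9])?$")

def is_valid_target(target: str) -> bool:
    """Accept only a plain IP address or a plausible hostname."""
    try:
        ipaddress.ip_address(target)
        return True
    except ValueError:
        return bool(HOSTNAME_RE.match(target))

def run_ping(target: str) -> str:
    if not is_valid_target(target):
        raise ValueError("invalid ping target")
    # Argument list + shell=False means the target is never shell-interpreted.
    result = subprocess.run(
        ["ping", "-c", "4", target],
        capture_output=True, text=True, timeout=30, check=False,
    )
    return result.stdout
```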

A second vulnerability allows attackers to execute cross-site request forgery (CSRF) attacks against users who are already authenticated on the devices. Attackers can piggyback on those authenticated browser sessions to perform unauthorized actions if they can trick the users into clicking specially crafted links.
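For context, the standard defense against this class of attack is a per-session anti-CSRF token that must accompany every state-changing request. The snippet below is a generic illustration of that check, not the device's actual firmware logic; the in-memory token store is a stand-in for real session storage.

```python
# Generic illustration of the standard CSRF defense (not Cisco's code):
# issue a per-session random token, embed it in forms, and reject any
# state-changing request that does not echo the same token back.
import hmac
import secrets

SESSION_TOKENS = {}  # session_id -> csrf token (stand-in for real session storage)

def issue_csrf_token(session_id: str) -> str:
    token = secrets.token_urlsafe(32)
    SESSION_TOKENS[session_id] = token
    return token  # embed this in a hidden form field

def is_request_allowed(session_id: str, submitted_token: str) -> bool:
    expected = SESSION_TOKENS.get(session_id)
    if expected is None or submitted_token is None:
        return False
    # Constant-time comparison avoids leaking token contents via timing.
    return hmac.compare_digest(expected, submitted_token)
```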

This vulnerability also provides a way to remotely exploit the first flaw. Researchers from Dutch security firm Securify, who found both issues, published a proof-of-concept URL that leverages the CSRF flaw to inject a command through the first vulnerability that adds a rogue administrator account on the targeted device.

A third security flaw that was patched by Cisco allows an unauthenticated attacker to upload files to arbitrary locations on a vulnerable device using root privileges. Existing files will be overwritten, the Securify researchers said.

Cisco released firmware versions 1.0.4.14 for the RV180 and RV180W models and firmware version 1.0.5.9 for the RV120W.

Users can limit the exposure of their devices to these flaws by not allowing remote access from the Internet to their administrative interfaces. If remote management is required, the Web Access configuration screen on the devices can be used to restrict access only to specific IP addresses, Cisco said in its advisory.


 


IDC: Public cloud to be $127B industry by 2018

Public IaaS market to grow 6x faster than IT market

Research firm IDC’s latest estimate pegs the public IT cloud market at $56.6 billion this year, and it’s expected to grow to a $127 billion market within four years.

The public cloud computing market is still in the early stages of adoption, with rapid growth forecast for the upcoming years. IDC predicts the cloud market will grow at a compound annual rate of 22.8%, six times faster than the overall IT market. By 2018, IDC expects cloud spending to account for half of software, server and storage spending growth.
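As a quick back-of-the-envelope check on those figures (this is just arithmetic, not IDC's actual model), compounding the 2014 market size at the forecast growth rate for four years lands close to the 2018 estimate:

```python
# Back-of-the-envelope check of IDC's figures (not IDC's model):
# $56.6B compounded at 22.8% a year for four years (2014 -> 2018).
current = 56.6          # billions of dollars, 2014
cagr = 0.228            # compound annual growth rate
years = 4

projected = current * (1 + cagr) ** years
print(f"Projected 2018 market: ${projected:.1f}B")   # roughly $128.7B, in line with the ~$127B estimate
```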

The SaaS market is the leader in the cloud, making up 70% of current cloud spending. The IaaS market is the second-largest, while the platform as a service (PaaS) market is the fastest growing, but smallest major segment of the market, IDC says.

One factor that IDC expects will help encourage cloud computing adoption is the rise of industry-specific cloud offerings. Having cloud computing services tailor-made for specific vertical industries – the idea behind a "community cloud" – will make cloud services more appealing to businesses. "Many of these new solutions will be in industry-focused platforms with their own innovation communities, which will reshape not only how companies operate their IT, but also how they compete in their own industry," IDC Chief Analyst Frank Gens says in a new report out today.

The market has already seen some industry-specific customization of cloud services. Cloud providers like Amazon, Microsoft and Verizon have separate cloud IaaS offerings tailored for government workloads, for example. In the SaaS industry, customization to specific vertical industries is becoming more common as well from companies like Salesforce.com.


 



BYOD forces users’ personal information on help desk


Help desk staffers can be caught in the middle when BYOD users get verrrry personal with their devices.

As the recent scandal over leaked celebrity photographs reminded us all, people use their electronic devices for very personal pursuits in the era of smartphone ubiquity. Depending on the age and inclination of its owner, a modern-day digital device might contain not just nude selfies like those that were shared online, but images from dating sites like Tinder and Grindr, creepshots, or other salacious or even illegal material downloaded from the backwaters of “the dark Web” via anonymizers like Tor.

As blogger Kashmir Hill summed up as the selfie scandal was unfolding, “Phones have become sex toys.”

If that’s true, then those toys are making their way into the workplace in record numbers, thanks to the ever-increasing number of organizations adopting bring-your-own-device (BYOD) policies.

In a perfect world, none of this should concern help desk employees — with a well-executed mobile management program in place that incorporates containerization, a technician ought to be able to assist employees with corporate apps and data without encountering so much as a pixel of not-safe-for-work (NSFW) material.

But the world isn’t always perfect, as IT support staffers know perhaps more than most. Which means they can find themselves looking not just at enterprise applications but at private images and texts they’d really rather not see. Or politely pointing out to an employee who’s synced all her devices to the cloud that pictures from her honeymoon are currently being displayed on the conference room’s smartboard. Or repeatedly removing viruses picked up by the same users visiting the same porn sites.

The scope of the problem

In a survey published last year by software vendor ThreatTrack Security, 40% of tech support employees said they’d been called in to remove malware from the computer or other device of a senior executive, specifically malware that came from infected porn sites. Thirty-three percent said they had to remove malware caused by a malicious app the executive installed. Computerworld checked with several security experts, none of whom was particularly surprised by that statistic.

The ThreatTrack survey didn’t tease out how much of this was on BYODs. But in a February 2014 survey by consulting firm ITIC and security training company KnowBe4, 34% of survey participants said they either “have no way of knowing” or “do not require” end users to inform them when there is a security issue with employee-owned hardware. Some 50% of organizations surveyed acknowledged that their corporate and employee-owned BYOD and mobile devices could have been hacked without their knowledge in the last 12 months. “BYOD has become a big potential black hole for a lot of companies,” says Laura DiDio, ITIC principal analyst.

One big concern: As McAfee Labs warns in its 2014 Threat Predictions report, “Attacks on mobile devices will also target enterprise infrastructure. These attacks will be enabled by the now ubiquitous bring-your-own-device phenomenon coupled with the relative immaturity of mobile security technology. Users who unwittingly download malware will in turn introduce malware inside the corporate perimeter that is designed to exfiltrate confidential data.”

Today’s malware from porn sites is usually not the kind of spyware that’s dangerous to enterprises, says Carlos Castillo, mobile and malware researcher at McAfee Labs — but that could change. “Perhaps in the future, because of the great adoption of BYOD and people using their devices on corporate networks, malware authors could . . . try to target corporate information,” he says.

In fact, a proof-of-concept application was recently leaked that is designed to target corporate data from secure email clients, Castillo says. The software used an exploit to obtain root privileges on the device to steal emails from a popular corporate email client, alongside other spyware exploits like stealing SMS messages. “While we still have not seen malware from porn sites that is dangerous to enterprises,” Castillo says, “this leaked application could motivate malware authors to use the same techniques using malicious applications potentially being distributed via these [porn] sites.”

Beyond security, there could be legal liabilities in play as well, some analysts caution. For example, a corporation might be liable if an IT staffer saw evidence of child porn on a phone.

To be sure, porn sites cause only a small fraction of the problems that users introduce into the enterprise. According to Chester Wisniewski, senior security advisor at Sophos, some 82% of infected sites are not suspicious places like porn sites, but rather sites that appear benign. And for smartphones, the biggest malware danger is from unsanctioned apps, not NSFW sites, he says.

Roy Atkinson, a senior analyst at HDI, a professional association and certification body for the technical service and support industry, sees no evidence of a widespread problem. When he specifically asked a couple of IT professionals who are responsible for mobile management in their organization, “they told me either ‘we don’t see it’ or ‘we make believe we don’t see it,'” says Atkinson. “People don’t really want to think about this or talk about it much.”

Escalate or let it go?

Whatever the frequency, when and if NSFW issues do arise, the IT department often winds up functioning as a “first responder” that has to decide whether to escalate the incident or let it go. “If somebody complains about [a co-worker] displaying pictures on their smartphone at a meeting . . . then the company’s acceptable use policy will come into play,” says Atkinson. Or if IT employees find malware that came from a porn site and could endanger the network, they may say something — to the employee or to a manager. “But as we know, policies are enforced somewhat arbitrarily,” Atkinson says.

Barry Thompson, network services manager at ENE Systems, a $37-million energy management and HVAC controls company in Canton, Mass., says he has seen problems increase because of what he calls “bring your own connection.” People assume “that it’s their personal phone so they can do as they like,” he says. But they are using the office Wi-Fi network, which Thompson monitors. He can see every graphic that passes through the network. “If I notice pictures of naked people, I can click on it and find out who’s looking at that,” he says. When that happens, Thompson usually gives a warning on first offense. If it happens again, he brings in the employee’s supervisor.


“It’s like the Wild West out there if it’s the employee’s own device,” says Dipto Chakravarty, executive vice president of engineering and products at ThreatTrack Security. Companies have a hard time enforcing their policies on BYOD devices, because it is, after all, the employee’s device.

Often, the “old boy network” kicks in. The user “is petrified that IT will see all these bad sites that the user has visited,” says Chakravarty. Employees admit they made a mistake and ask IT to please ignore the material. “IT doesn’t really want to see the dirty laundry, so they say, ‘Hey, no problem. I’ll just wipe it clean and you’re good to go,'” he says. “That’s the norm.”

The tendency to “cover for your buddies — guys have been doing that for time immemorial,” says Robert Weiss, senior vice president of clinical development with Elements Behavioral Health and a sex addiction expert. But there are social and ethical concerns for both the employee and for IT, says Weiss, co-author of the 2014 book, Closer Together, Further Apart: The Effect of Digital Technology on Parenting, Work and Relationships.

What happens, asks Weiss, when IT sees photos of naked children on someone’s phone, which could be child porn, or repeatedly removes malware from porn sites from the same user’s device, which could indicate an addiction? IT staffers are typically not well equipped to address criminal or addictive behaviors.

Weiss thinks there should be clear policies that indicate when IT needs to report such information to human resources, similar to policies about repeated drinking or signs of other addictions, and let HR take it from there. “The IT person should not be involved,” he says. “I would not want to put the IT person in the position of having to talk about sex with an employee that they don’t particularly know well.”


At least one technical analyst, who has worked in IT support at a range of companies, thinks reporting such users to HR is taking it too far. Flagging child pornography is one thing, he says, but addiction? “I’m not going to HR about BYOD riddled with porn. It’s their device. As much as I love helping people, their personal porn habits, even at an addiction level, are not my problem. Unless it’s criminal, I don’t care.”

Protecting IT from users

The ideal fix is to create a corporate container to hold all business applications, including corporate email and Internet browsing.

And the best way to achieve that goal is with the emerging class of enterprise mobility management (EMM) technology, says Eric Ahlm, a research director at Gartner. “When properly configured, EMM solutions create a corporate container that provides OS-level security and isolates apps and data in the container from what’s outside,” explains Ahlm. The corporate container can encompass email applications, Web browsers, customer mobile applications and off-the-shelf mobile applications. Within that container, IT can create isolated data-sharing and -protection policies, or easily deploy more mobile apps, or remove them — all without touching the personal information outside of the container, he explains. “It makes all those issues go away.”

On the personnel management side of the equation, companies should be sure to update their acceptable use policies to include BYOD. ENE’s Thompson found that his company’s acceptable use policy did not mention personally owned devices. So last year, says Thompson, ENE amended the policy to specify that “any use of corporate resources or systems, regardless of ownership of the devices, obligates the user to comply with the corporate acceptable use policy.”



Virtual reality gains a small foothold in the enterprise

Prototypes and simulations based on virtual reality can save companies millions.

The rapid growth of the mobile sector has had an unexpected dividend – by bringing down the costs and improving the quality of motion sensors, screens, and processors it has helped usher in a new era of virtual reality technology.

Systems previously available only to the largest manufacturers or the military can now be put together with consumer-grade technology at a fraction of the price, and companies are already taking advantage of the opportunities.

When it comes to virtual reality, one of the biggest bangs for the buck is in virtual prototypes. Virtual models of buildings, oil tankers, factory floors, store shelves or cars can now be uploaded into a virtual environment and examined by safety inspectors, designers, engineers, customers and other stakeholders.

The Ford Motor Company, for example, has long been using virtual reality when it comes to prototypes and simulations, but the new wave of virtual reality technology is dramatically expanding its reach.

Ford’s Immersive Virtual Environment lab, one of several areas in which Ford uses virtual reality, for example, has recently added the Oculus Rift virtual reality headset to its virtual reality platforms.

It’s used in combination with a shell of a car where the seat, steering wheel, and other parts can be repositioned to match those of a prototype car.

“If you look at it, you’d think it was a very stripped-down vehicle,” says Elizabeth Baron, who heads up the lab. But when engineers sit down in the driver’s seat and put on virtual reality headsets, they’re virtually transported into the interior of the prototype.

Elizabeth Baron shows how Ford uses Oculus Rift.

“You have a gas pedal, brakes, steering wheel, a door, and when you’re touching stuff, it’s real,” Baron says. “But when you’re looking around, you’re seeing the virtual data. That’s where the Oculus is specially useful.”

The Oculus Rift is the head-mounted virtual reality display that ushered in the current age of virtual reality with a $2.4 million Kickstarter campaign in 2012, followed by a jaw-dropping $2 billion buyout by Facebook earlier this year.

The Oculus Rift hasn’t officially hit the market yet, but developer kits are available from the company for $350 each and more than 100,000 have already been sold. The device combines a high-resolution screen, motion sensors, and a set of lenses. The motion sensors track where the user is looking and the lenses stretch out the screen so it covers most of the user’s field of view. The result is a very convincing illusion that the wearer has been transported into a virtual world.

“I’m extremely excited about the developments in the headspace scene and the work Oculus has done to bring low cost, wide-field of view to the market,” Baron says. “I’m just over the moon about it. The good thing for Ford is, with our approach for using different display technologies, we’re already ready to take advantage of the developments that come out of the virtual headset space.”

Another virtual reality system is a CAVE (computer assisted virtual environment), which is a room with large screens on three walls and on the ceiling. Users wear stereoscopic glasses for a holodeck-like effect – life-size, 3D images of objects appear in the middle of the room, so that engineers can walk around and examine them.

Another system allows users to walk around inside a large open space while it tracks their position. “We can put an F-250 [super duty truck] into that environment and you can walk around it like it’s a life-sized vehicle,” Baron says. “It’s like an inspection tool for what we’re producing and what our customers might take delivery of. That’s a really important aspect in our product development process.”

A virtual environment allows engineers to dial up different lighting settings, to see how the exterior would look at noon on a hazy day, or in the evening or under mercury vapor lights. Virtual environments also help enable long-distance collaboration, she says.

“We also have a virtual space in Australia, and if they’re immersed and we’re immersed at the same time, we can see where they are in the virtual environment and we can talk to each other,” she says. “We can say, ‘Look at this, look at that.’”

And virtual reality allows the company to look at many more prototypes than would have been possible if they had to be actually built.

“There is no way we could build thousands of prototypes,” she says. “We would only be able to build a handful. But also, there is no way we could check in the physical world all the things we check in the virtual worlds. We can make intelligent decisions about our design, with respect to how we manufacture it, and that’s a huge time save and cost save.”

Ford is expanding its use of virtual reality, she adds. “We’re actually creating another virtual space here in Dearborn [Michigan] to handle the overflow,” she says. “We’re so packed. We can’t fit in what we can do in one day. It’s been shown to be so valuable.”

Ford also uses virtual reality for manufacturing assembly simulations, to help ensure the health and safety of workers, for training, and to study how drivers behave.

“We have driving simulations, another virtual reality application, where we’ll bring in people who haven’t slept all night and ask them to perform some tasks,” she says. “And then perform an analysis on how they respond versus someone who’s had their fresh cup of coffee and they’re bright and cheerful in the morning.”

Other manufacturing companies are also upgrading their virtual prototypes from simple 3D graphics on a monitor to fully immersive virtual reality systems such as those made possible by the Oculus Rift and similar devices.

Medical device companies, for example, are among the early adopters, says Jeremy Duimstra, a professor of user experience at University of California San Diego and CEO and creative director at San Diego-based MJD Interactive, which counts Disney, Red Bull, P&G and Titleist among its clients.

“Being able to virtually interact with a device in the design phase, without having to build physical objects … allows for more innovation,” he says.

Plus, there’s the cost savings of materials and manpower of physically mocking up hundreds of prototypes. “Build the product virtually, test it, iterate, and only build when you know it’s right,” he says.

Environments that are physically dangerous for people are also ripe for going virtual.

“Our oil and gas clients are definitely interested in this space,” says Mary Hamilton, who heads up the digital experiences research and development group at Accenture. Immersive virtual reality allows people who might be in different locations to visit a difficult-to-reach facility, to get views such as X-rays or schematic views that might be impossible in real life, and enables low-risk, lower-cost training for new employees.

Marketing applications are also expanding, she says.
For example, low-cost head-mounted displays will allow retailers to replace their immersive CAVE environments – which can cost hundreds of thousands of dollars to set up. Companies can use the technology to have focus groups walk through virtual stores, interact with different shelf layouts, or even try out new products.

“It would significantly lower costs, allow companies to do more of this, and allow them to do it in multiple locations,” she says.

The second wave
One virtual reality wave has already come and gone, in the 1990s. Movies like “The Lawnmower Man,” devices like Nintendo’s Virtual Boy and virtual reality arcades made the technology hot, but by the time “The Matrix” came out at the end of the decade it was clear that virtual reality technology was too expensive and too bulky for widespread use. In addition, graphics quality was poor and high latency and poor head-tracking combined to make users nauseous.

As a result, virtual reality became limited to high-end, narrowly focused applications such as military simulations, movie special effects, and training and simulations in manufacturing, oil, and the medical industries, says Jacquelyn Ford Morie, formerly a virtual reality expert at the University of Southern California’s Institute for Creative Technologies. Virtual reality immersion therapy has been used for a decade now to treat Post Traumatic Stress Disorder, and to manage the pain of burn victims.

“Now we have this second wave of virtual reality,” says Morie. “The difference between then and now is that it’s affordable. Instead of a $30,000 head-mounted display, you now have a $300 head-mounted display.”

Jacquelyn Ford Morie is founder and chief scientist at All These Worlds Inc., a Los Angeles-based virtual environment consulting and development firm.

The general population is also more used to technology than they were 20 years ago, she adds, and there are more companies creating content for the new virtual reality platforms. Her own company creates applications in virtual worlds for NASA and other enterprise clients.

“We’re doing things like making virtual worlds that will help astronauts on long-duration space flight missions,” she says.

Today, most enterprise virtual reality is internally focused, she says. That is likely to change as more of this technology gets into the hands of consumers, and she’s looking forward to working on consumer-focused projects.

“If everyone has a 3D head-mounted display, there’s no reason not to feed a preview of that new product,” she says. “Create emotionally evocative, 3D immersive ads, so all of a sudden they feel like they’re on the mountain, about to ski down with my new snowboard.”


 


Apple iPad Air 2 is thinner and speedier than its predecessors

Apple’s iPad Air 2 is thinner and lighter than its predecessor, and should be speedier as well, thanks to a new processor.

It also has improved camera and security features, as does the iPad Mini 3, Apple said Thursday during an event at its Cupertino, California, campus, unveiling the tablets at a time when the company's dominance in that market has waned.

The iPad Air 2, which has a 9.7-inch screen, is 6.1 millimeters thick, which is 18 percent thinner than the iPad Air. The Air 2 offers 10 hours of battery life.

The tablet has the all-new A8X chip, which is a variant of the A8 chip in the iPhone 6 and iPhone 6 Plus. The chip is 40 percent faster and provides 2.5 times better graphics than the A7 chip in the iPad Air.

“We’re able to deliver console-level graphics in your hand,” said Phil Schiller, Apple senior vice president of worldwide marketing.

Other features include an 8-megapixel iSight rear camera and a FaceTime front camera. The iSight camera can take 1080p video that can be manipulated in multiple modes such as slow motion and time lapse.

Image and video manipulation tools such as Pixelmator and Replay will help users edit, repair and manipulate images, taking advantage of the faster graphics processor.

The iPad Mini 3 has a 7.9-inch screen, an A7 chip, a 5-megapixel iSight rear camera, a FaceTime camera and 802.11ac wireless.

Both tablets have the Touch ID fingerprint sensor, which lets users bypass passwords when logging into a smartphone or buying things online. The fingerprint technology is used with the Apple Pay payment system.

The iPad Air 2 is priced at US$499 for 16GB of storage and Wi-Fi, $599 for 64GB and $699 for 128GB. A version of the tablet with cellular connectivity is $130 more.

The iPad Mini 3 is priced at $399 for 16GB, $499 for 64GB and $599 for 128GB.

Both tablets can be ordered now, with shipping set for next week.

The tablets are hitting the market at a time when Android tablet makers Samsung, Lenovo and Asus are gaining ground on Apple. Apple’s tablet shipments declined 9.3 percent during the second quarter of 2014 compared to the same quarter last year, while overall worldwide tablet shipments went up 11 percent, according to IDC.

Apple faces further challenges as more users opt for larger-screen smartphones and hybrid devices instead of tablets. The iPhone 6 Plus, which has a 5.5-inch screen, is off to a hot start, and could hurt iPad sales. And Google’s Nexus 9, the first 64-bit Android tablet, starts shipping next month.

IDC is projecting overall worldwide tablet shipments to grow by just 6.5 percent this year.

But Apple CEO Tim Cook put a—predictably—positive spin on the situation at the event, noting that the company has sold 225 million tablets.

“We’ve sold more iPads in the first four years than any product in our history,” Cook said.


 


Startup proposes fiber-based Glass Core as a bold rethink of data center networking

Software Defined Networking (SDN) challenges long-held conventions, and newcomer Fiber Mountain wants to use the SDN momentum to leapfrog forward and redefine the fundamental approach to data center switching while it's at it. The promise: 1.5x to 2x the capacity for half the price.

How? By replacing traditional top-of-rack and other data center switches with software-controlled optical cross-connects. The resultant "Glass Core," as the company calls it, provides "software-controlled fiber optic connectivity emulating the benefits of direct-attached connectivity from any port … to any other server, storage, switch, or router port across the entire data center, regardless of location and with near-zero latency."

The privately funded company, headed by Founder and CEO M. H. Raza, whose career in networking includes stints at ADC Telecommunications, 3Com, Fujitsu BCS and General DataComm, announced its new approach at Interop in New York earlier this week. It’s a bold rethinking of basic data center infrastructure that you don’t see too often.

“Their value proposition changes some of the rules of the game,” says Rohit Mehra, vice president of network infrastructure at IDC. “If they can get into some key accounts, they have a shot at gaining some mind share.”

Raza says the classic approach of networking data center servers always results in “punting everything up to the core” – from top of rack switches to end of row devices and then up to the core and back down to the destination. The layers add expense and latency, which Fiber Mountain wants to address with a family of products designed to avoid as much packet processing as possible by establishing what amounts to point-to-point fiber links between data center ports.

“I like to call it direct attached,” Raza says. “We create what we call Programmable Light Paths between a point in the network and any other point, so it is almost like a physical layer connection. I say almost because we do have an optical packet exchange in the middle that can switch light from one port to another.”

That central device is the company’s AllPath 4000-Series Optical Exchange, with 14 24-fiber MPO connectors, supporting up to 160×160 10G ports. A 10G port requires a fiber pair, and multiple 10G ports can be ganged together to support 40G or 100G requirements.

The 4000 Exchange is connected via fiber to any of the company’s top-of-rack devices, which are available in different configurations, and all of these devices run Fiber Mountain’s Alpine Orchestration System (AOS) software.

That allows the company’s homegrown AOS SDN controller, which supports OpenFlow APIs (but is otherwise proprietary), to control all of the components as one system. Delivered as a 1U appliance, the controller “knows where all the ports are, what they are connected to, and makes it possible to connect virtually any port to any other port,” Raza says. The controller “allows centralized configuration, control and topology discovery for the entire data center network,” the company reports, and allows for “administrator-definable Programmable Light Paths” between any two ports in the data center.

How do the numbers work out? Raza uses a typical data center row of 10 racks of servers as the basis for comparison. The traditional approach:

Each rack typically has two top-of-rack switches for redundancy, each of which costs about $50,000 (so $100,000/rack, or $1 million per row of 10 racks).
Each row typically has two end-of-row switches that cost about $75,000 each (another $150,000)
Cabling is usually 5%-10% of the cost (10% of $1.15 million adds $115,000)
Total: $1.265 million

With the Fiber Mountain approach:
Each top-of-rack switch has capacity enough to support two racks, so a fully redundant system for a row of 10 racks is 10 switches, each of which costs $30,000 ($300,000).
The 4000 series core device set up at the end of an aisle costs roughly $30,000 (and you need two, so $60,000).
Cabling is more expensive because of the fiber used, and while it probably wouldn't be more than double the expense, for this exercise Raza says to use $300,000.

Total: $660,000. That's about half, and it doesn't include savings that would be realized by reducing demands on the legacy data center core now that you aren't "punting everything up" there all the time.
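For readers who want to reproduce the comparison, the arithmetic behind Raza's example row works out as follows (the figures are the ones quoted above; this is just a quick calculation, not vendor pricing):

```python
# Reproducing the cost comparison from Raza's example row of 10 racks.
RACKS = 10

# Traditional approach
tor_switches = RACKS * 2 * 50_000          # two $50K top-of-rack switches per rack
end_of_row = 2 * 75_000                    # two $75K end-of-row switches
hardware = tor_switches + end_of_row       # $1,150,000
cabling = 0.10 * hardware                  # ~10% of hardware cost
traditional_total = hardware + cabling
print(f"Traditional: ${traditional_total:,.0f}")     # $1,265,000

# Fiber Mountain approach
fm_tor = 10 * 30_000                       # one $30K switch covers two racks; 10 for redundancy
fm_core = 2 * 30_000                       # two $30K 4000-series optical exchanges
fm_cabling = 300_000                       # Raza's rough fiber cabling estimate
fm_total = fm_tor + fm_core + fm_cabling
print(f"Fiber Mountain: ${fm_total:,.0f}")           # $660,000
```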

What’s more, Raza says, “besides lower up front costs, we also promise great Opex savings because everything is under software control.”

No one, of course, rips out depreciated infrastructure to swap in untested gear, so how does the company stand a chance at gaining a foothold?

Incremental incursion.
Try us in one row, Raza says. Put in our top-of-rack switches, connect all the server fibers and the existing top-of-rack switch fibers to them, and connect our switches to one of our cores at the end of the aisle. "Then, if you can get somewhere on fiber only, you can achieve that, or, if you need the legacy switch, you can shift traffic over to that," he says.

Down the road, connect the end-of-aisle Glass Core directly to other end-of-row switches, bypassing the legacy core altogether. The goal, Raza says, is to directly connect racks and start to take legacy switching out.

While he is impressed by what he sees, IDC’s Mehra says “the new paradigm comes with risks. What if it doesn’t scale? What if it doesn’t do what they promise? The question is, can they execute in the short term. I would give them six to 12 months to really prove themselves.”

Raza says he has four large New York-based companies considering the technology now, and expects his first deployment to be later this month (October 2014).



States worry about ability to hire IT security pros

States’ efforts to improve cybersecurity are being hindered by lack of money and people. States don’t have enough funding to keep up with the increasing sophistication of the threats, and can’t match private sector salaries, says a new study.

This just-released report by Deloitte and the National Association of State CIOs (NASCIO) about IT security in state government received responses from chief information security officers (CISOs) in 49 states. Of that number, nearly 60% believe there is a scarcity of qualified professionals willing to work in the public sector.

Nine in 10 respondents said the biggest challenge in attracting professionals “comes down to salary.”

But the problem of hiring IT security professionals isn’t limited to government, according to Jon Oltsik, an analyst at Enterprise Strategy Group (ESG).

In a survey earlier this year of about 300 security professionals by ESG, 65% said it is “somewhat difficult” to recruit and hire security professionals, and 18% said it was “extremely difficult.”

“The available pool of talent is not really increasing,” said Oltsik, who says that not enough is being done to attract people to study in this area.

Oltsik’s view is backed by a Rand study, released in June, which said shortages “complicate securing the nation’s networks and may leave the United States ill-prepared to carry out conflict in cyberspace.”

The National Security Agency is the country’s largest employer of cybersecurity professionals, and the Rand study found that 80% of hires are entry level, most with bachelor’s degrees. The NSA “has a very intensive internal schooling system, lasting as long as three years for some,” Rand reported.

Oltsik said if the states can’t hire senior people, they should “get the junior people and give them lots of opportunities to grow and train.” Security professionals are driven by a desire for knowledge, want to work with researchers and want opportunities to present their own work, he said.

Another way to help security efforts, said Oltsik, is to seek more integrated systems, instead of lot of one-off systems that require more people to work on them.

