Gmail users urged to change passwords after apparent attack

Hackers compromised nearly 5M Gmail passwords

Security experts are urging Gmail users to change their passwords amid reports that hackers gained access to the credentials of 5 million users of the free email service. Some password combinations have been spotted on Russian cybercrime forums.

Peter Kruse, head of the eCrime unit at CSIS Security Group in Copenhagen, told Computerworld that most of the nearly 5 million stolen Gmail passwords are about three years old, but many are still legitimate and functioning.

He said CSIS experts suspect that several hackers gathered the credentials by compromising endpoints and exploiting vulnerable network protocols.

Google did not respond to a Computerworld request for comment but has told other news outlets that it has found no evidence that its systems have been compromised.

Google’s cloud-based email service is used by individuals as well as enterprises.

Russian media outlet RIA Novosti reported that hackers stole a database containing the Google account logins and passwords and published it on a Bitcoin security online forum.

The database reportedly contains 4.93 million Google accounts from English, Russian and Spanish users.

Kruse said the discovery of the hack comes just days after more than 4.6 million Russian-based Mail.ru accounts and 1.25 million Yandex e-mail boxes were reportedly compromised. Yandex is the largest Russian-based search engine.


 


OpenDNS Servers : Increase internet speed using OpenDNS

Broadband is considered a high-speed Internet connection, but you may have noticed that even broadband does not always deliver great speed, particularly during large downloads or when heavy applications such as torrent clients are running. One reason for these fluctuations is the DNS servers, short for Domain Name System servers, that your connection relies on; these are normally provided by your ISP. When transferring heavy files, many of us have to put up with poor Internet speed. But there is a solution: you can work around the problem by using public DNS servers such as the OpenDNS servers.

Increase internet speed using OpenDNS
OpenDNS servers and Google DNS are public resolvers that allow users to enjoy better Internet performance. A tool such as DNS Benchmark lets you test which DNS server IP responds fastest from your location and is therefore most likely to improve your browsing speed. Taking a step-by-step approach, we will discuss OpenDNS servers here, and by the end of the topic you will know what OpenDNS is and how its related concepts fit together.
What are OpenDNS servers?

OpenDNS provides network security on top of standard DNS resolution. It adds features such as phishing protection and optional content filtering that help protect your systems from malicious attacks. OpenDNS is available in both free and premium packages for individuals and organizations. It helps pages load more efficiently and blocks known malicious pages. Other features that you can enjoy with OpenDNS servers are:

Domain Blocking
DNS Security
Typo Correction
Botnet Protection

OpenDNS Servers
Anyone using the default DNS servers provided by their ISP can switch to an OpenDNS server. Google DNS is also quite capable and worth trying as well. OpenDNS blocks known-malicious domains that files may try to contact during downloads. Often, even on a broadband connection, a web page takes a while to open; switching to the OpenDNS server IPs can help with this, because OpenDNS maintains a large pool of cached DNS records that helps the Internet perform better.

Consider an example:
Whenever you type a domain name such as www.bloggeroutline.com into your browser, the name is resolved to an IP address and a request is sent to that address, and the page then opens on your system. Sometimes, because of a small or stale DNS cache, the resolver cannot fetch the record for the site, which is one reason a website occasionally shows an error and fails to display on request. OpenDNS servers help you avoid such errors.
OpenDNS is often faster than other resolvers because its servers are located in many key locations and it maintains a larger cache.
For individual users, the free OpenDNS IPs are sufficient for routine use; the free package is good enough for regular home users. Below are the current OpenDNS IPs that you can use when changing your DNS settings:

208.67.222.222
208.67.220.220

Google DNS IP addresses
IPv4: 8.8.8.8 and/or 8.8.4.4.
IPv6: 2001:4860:4860::8888 and/or 2001:4860:4860::8844
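
If you want to compare resolvers before switching, the short Python sketch below (a rough illustration, not a substitute for a full benchmark such as DNS Benchmark) times a single raw DNS lookup against the public resolvers listed above using only the standard library; the query name and timeout are arbitrary choices.

```python
import socket
import struct
import time

def dns_query_time(server, name, timeout=2.0):
    """Send a minimal DNS A-record query over UDP and return the
    round-trip time in milliseconds, or None on timeout."""
    # Header: query ID, flags with recursion desired (0x0100), 1 question.
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # Question: length-prefixed labels, then QTYPE=A (1) and QCLASS=IN (1).
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        start = time.perf_counter()
        sock.sendto(header + question, (server, 53))
        sock.recvfrom(512)
        return (time.perf_counter() - start) * 1000
    except socket.timeout:
        return None
    finally:
        sock.close()

resolvers = {"OpenDNS": "208.67.222.222", "Google DNS": "8.8.8.8"}
for label, ip in resolvers.items():
    rtt = dns_query_time(ip, "www.example.com")
    result = "timeout" if rtt is None else "%.1f ms" % rtt
    print("%-12s %-16s %s" % (label, ip, result))
```

Results vary with your location and network, so run the comparison several times before settling on a resolver.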

How to change DNS settings on Mac OS

Here are the steps for changing DNS settings on a Mac:

Go to System Preferences
Click the Network icon
Unlock the pane by entering your password (if necessary)
Click the Advanced button
Click the DNS tab
Click the + button and add the DNS servers
Click OK and then Apply

How to change DNS settings on a Windows-based system

By following these steps, you can change DNS settings and add new servers (a scripted alternative is sketched after the list).
Go to the Control Panel.
Click Network and Internet, then Network and Sharing Center, and click Change adapter settings.
Select the connection for which you want to configure DNS (for example, Local Area Connection or Wireless Network Connection), right-click it and select Properties.
Select the Networking tab. Under This connection uses the following items, select Internet Protocol Version 4 (TCP/IPv4) or Internet Protocol Version 6 (TCP/IPv6) and then click Properties.
Click Advanced and select the DNS tab.
Click Add, enter the OpenDNS or Google DNS addresses listed above, and click OK.
Click OK to close the remaining dialogs.
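
For those who prefer to script the change on Windows, the sketch below drives the built-in netsh tool from Python. It assumes the adapter is named "Local Area Connection" and uses the classic netsh interface ip syntax; the adapter name and exact syntax vary by Windows version, and the commands must be run from an elevated (administrator) prompt.

```python
import subprocess

# Assumptions: the adapter is named "Local Area Connection" and the classic
# "netsh interface ip" syntax is available; adjust for your system and run
# from an elevated (administrator) prompt.
adapter = "Local Area Connection"
primary, secondary = "208.67.222.222", "208.67.220.220"  # OpenDNS resolvers

commands = [
    # Replace any statically configured DNS entries with the primary server.
    'netsh interface ip set dns name="%s" source=static addr=%s' % (adapter, primary),
    # Append the secondary server as the second entry in the list.
    'netsh interface ip add dns name="%s" addr=%s index=2' % (adapter, secondary),
]

for cmd in commands:
    print("Running:", cmd)
    subprocess.run(cmd, shell=True, check=True)
```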

OpenDNS servers are well suited to today's needs, since so many of our routine tasks depend on the Internet. They are an easy way to enjoy trouble-free, high-performance Internet access.



NTT tests 400Gbps optical technology for Internet backbone

One fiber in the core of the Internet could send the data equivalent of 600 DVDs in a second

NTT has successfully tested technology for optical Internet backbone connections that can transmit 400Gbps on a single wavelength.

Working with Fujitsu and NEC, the Japanese telecommunications giant verified the digital coherent optical transmission technology for distances of several thousand kilometers to 10,000 km. With it, a single wavelength of light can carry 400 Gbps, four times the capacity of previous systems. Each fiber can carry multiple wavelengths, and many fibers can be bundled into one cable.

The approach could more than double existing capacity to meet ever-increasing bandwidth demand, especially by heavy data users.

The technology could be used in the next generation of backbone links, which aggregate calls and data streams and send them over the high-capacity links that go across oceans and continents. The fiber in the network would stay the same, and only the equipment at either end would need to change.

While the current capacity on such links is up to 8Tbps (terabits per second) per fiber, the new technology would make a capacity of 24Tbps per fiber possible, according to NTT.

“As an example of the data size, 24 Tbps corresponds to sending information contained in 600 DVDs (4.7 GB per DVD) within a second,” an NTT spokesman wrote in an email. “The verification was done using algorithms which are ready to be implemented in CMOS circuits to show that these technologies are practically feasible.”
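
A quick back-of-the-envelope check of those figures (the per-fiber wavelength count below is inferred from the stated numbers rather than quoted from NTT):

```python
# Sanity-check the quoted figures: 600 DVDs of 4.7 GB per second versus a
# 24 Tbps fiber, and the wavelength count implied by 400 Gbps per wavelength.
dvd_gb = 4.7              # gigabytes per DVD
dvds_per_second = 600
fiber_tbps = 24           # stated per-fiber capacity, terabits per second
wavelength_gbps = 400     # stated capacity of a single wavelength

dvd_tbps = dvds_per_second * dvd_gb * 8 / 1000   # GB/s -> gigabits/s -> terabits/s
print("600 DVDs per second = %.2f Tbps (stated fiber capacity: %d Tbps)"
      % (dvd_tbps, fiber_tbps))
print("Implied wavelengths per fiber: %d" % (fiber_tbps * 1000 // wavelength_gbps))
```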

To compensate for distortions along the optical fiber, researchers from the consortium developed digital backward propagation signal processing with an optimized algorithm. The result of this and other research is that the amount of equipment required for transmissions over long distances can be reduced, meaning the network could consume less electricity.

“We are extremely excited to show this groundbreaking performance surpassing 100 Gbps coherent optical transmission systems,” Masahito Tomizawa, executive manager of consortium leader NTT Network Innovation Labs, wrote in an email. “This new technology maintains the stability and reliability of our current 100 Gbps solutions while at the same time dramatically improving performance.”

The consortium said it is taking steps toward commercialization of the technology on a global scale but would not say when that might happen.



How IT is creating the ‘smart’ workplace

By deploying technologies like Wi-Fi, Wikis and WebEx, IT is leading the charge as enterprises restructure for maximum collaboration.

From pharmaceutical companies to oil and gas firms, enterprises are breaking down silos and boosting the bottom line through the expanded use of collaboration tools and technologies. And IT is leading the charge.

IT executives are becoming collaboration architects who partner with human resources, facilities, and corporate communications to remove barriers that impede collaboration. These barriers range from physical workspaces to organizational practices and processes.

The goal of unified communications is to allow anybody to engage anybody else spontaneously through instant messaging, web conferencing and videoconferencing, regardless of level, role or region. Asynchronous collaboration tools, including wikis, forums, emerging knowledge and project management systems, and collaboration-as-a-service platforms, enable enterprise-wide information sharing and broad participation in decisions.

Konica Minolta’s structural changes are impacting the role of IT. As remote diagnosis increasingly replaces on-site service calls for the company’s printer business, service has become a function of IT. The company has eliminated outsourcing contracts and hired more IT people. Salespeople and IT are collaborating as customer conversations now include topics such as network security, enterprise content management, document management, and changing workplace practices.

As Konica Minolta Australia evolves from an equipment vendor to an IT services provider, the company is looking to acquire small IT services companies with specialized knowledge. This infusion of expertise and broader input into decisions may provide



Fumble-free USB 3.1 connector will be in products by year end

USB Implementers Forum says final USB 3.1 specification will be published in July

The redesigned USB connector, belonging to the USB 3.1 specification, could be in laptops and mobile devices by the end of the year.

The appealing advances in USB 3.1 technology include faster data-transfer speeds and the user-friendly Type-C connector, which is identical on both ends of the cable and reversible. Since users won't have to worry about plug orientation, they should be less likely to have trouble fitting cables into slots, an improvement over current USB connectors.

USB 3.1 also boosts data-transfer speeds: cables will ultimately be able to move data at 10Gbps (gigabits per second). That's an improvement over the 5Gbps transfer rate of the current USB 3.0 connector, which is in most laptops that ship today. The faster speeds open up the use of USB 3.1 connectors for monitors, high-definition TVs and other electronics.

The connector is “robust enough for laptops and tablets; slim enough for mobile phones,” said standards-setting organization USB Implementers Forum in an email statement. The Type-C connectors could replace the current micro-USB 2.0 ports in most of the latest smartphones and tablets, which have different ends and are widely used for recharging.

The Type-C USB 3.1 connector is as thin as micro-USB 2.0 connectors, which could also lead to the development of thinner and sleeker devices. Current USB 3.0 ports in laptops are larger, so thinner slots could lead to smaller products. The USB 3.1 connector will also replace micro-USB 3.0 plugs, which are larger and used in just a handful of devices such as the Samsung Galaxy Note 3.

The USB 3.1 specification is scheduled to be completed in July and products may be out by the end of the year, a USB-IF spokeswoman said in an email Wednesday.

The new cables will not work on existing USB ports, so the USB-IF is developing separate cables so existing products can connect to products with USB 3.1 ports. Users will have to purchase new devices to take advantage of end-to-end USB 3.1 connections.

USB 3.1 will be slower than the Thunderbolt 2.0 connector technology, which is used in computers and peripherals and can transfer data at 20Gbps. But USB has many advantages over Thunderbolt, including lower cost and a wide installed base. Peripherals based on Thunderbolt, which supports the PCI-Express and DisplayPort protocols, are still priced at a premium.

Intel, which is leading Thunderbolt development and supports USB, has dismissed the idea of competition between the two protocols. Intel has even suggested that USB could technically run over the same wires alongside Thunderbolt, but that hasn't happened yet.

USB-IF is talking up the new specification at the Intel Developer Forum this week in Shenzhen, China.



10 Hot Internet of Things Startups

As Internet connectivity gets embedded into every aspect of our lives, investors, entrepreneurs and engineers are rushing to cash in. Here are 10 hot startups that are poised to shape the future of the Internet of Things (IoT).

As Internet connectivity gets embedded into everything from baby monitors to industrial sensors, investors, entrepreneurs and engineers are rushing to cash in. According to Gartner, Internet of Things (IoT) vendors will earn more than $309 billion by 2020. However, most of those earnings will come from services.

Gartner also estimates that by 2020, the IoT will consist of 26 billion devices. All of those devices, Cisco believes, will end up dominating the Internet by 2018. You read that right: In less time than it takes to earn a college degree (much less time these days), machines will communicate over the Internet a heck of a lot more than people do.

With the IoT space in full gold-rush mode, we evaluated more than 70 startups to find 10 that look poised to help shape the future of IoT.

Note: These 10 are listed in alphabetical order and are not ranked.

1. AdhereTech
Why they’re on this list: There are plenty of companies trying to cash in on IoT by tethering it to healthcare. Let’s call it the Internet of Health (IoH). What’s impressive about AdhereTech, though, is that it focuses on a discrete problem and knocks it out of the park with its solution. It’s simple and smart.

Prescription adherence — sticking to your prescribed medication regimen — is one of the biggest problems plaguing medicine. Current levels of adherence are as low as 40 percent for some medications. Poor adherence to appropriate medication therapy has been shown to result in complications, increased healthcare costs, and even death. Medication adherence for patients with chronic conditions, such as diabetes, hypertension, hyperlipidemia, asthma and depression, is an even more significant problem, often requiring intervention.

According to AdhereTech, of all medication-related hospital admissions in the United States, 33 to 69 percent are related to poor medication adherence. The resulting costs are approximately $100 billion annually, and as many as 125,000 deaths per year in the U.S. can be attributed to medication non-adherence.

AdhereTech’s pill bottle seeks to increase adherence and reduce the costs associated with missed or haphazard medication dosage. The bottle uses sensors to detect when one pill or one liquid milliliter of medication is removed from the bottle. If a patient hasn’t taken his/her medication, the service reminds them via phone call or text message, as well as with on-bottle lights and chimes. The company’s software also asks patients who skip doses why they got off schedule. In addition to helping people remember, AdhereTech aggregates data anonymously to give a clearer picture of patient adherence overall to pharmaceutical companies and medical practitioners.

Customers: AdhereTech has trials running with Boehringer Ingelheim for a TBD medication, The Walter Reed National Military Medical Center for type 2 diabetes medication and Weill Cornell Medical College for HIV medication.

Competitive Landscape: Vitality GlowCap is the most direct competitor for AdhereTech. Other less direct competitors include RXAnte, an analytics company that helps to identify patients most at risk for falling off their prescription regimen, and Proteus Digital Health, which puts tiny digestible sensors inside of pills to give doctors a clearer picture of patient compliance.

2. Chui
What they do: Combine facial recognition with advanced computer vision and machine learning techniques to turn faces into “universal keys.” Chui refers to its solution as “the world’s most intelligent doorbell.”

Why they’re on this list: Home and business security systems are expensive, generate tons of false alarms and really can’t be called “intelligent.”

Chui’s facial recognition technology replaces keys, passwords or codes, allowing you to disarm a security system with facial recognition. Chui emphasizes that our faces are unique, universal and nontransferable. Your features cannot be hacked, nor can they be spied on.

For businesses, or even homes with service people stopping by regularly, Chui also keeps track of who is coming and going, documenting visitors and time-stamping their visits.

Even better, the system also learns as it goes. If a person’s facial appearance changes over time, the system will learn this. It is also able to distinguish between identical twins, and it is able to recognize attempts to fool the system with pictures or videos.

The first application of this system is an “intelligent doorbell.”

You can use it to open your door to invited guests, such as a cleaning service or dog walker, right from your smartphone. You no longer need to hand out keys to service providers. Chui ensures you have complete control over who is entering your home through real-time notifications and the ability to engage in live conversations with visitors.

Customers: In less than four months, Chui says that it has acquired 311 customers. A company spokesperson notes that they have more than doubled their crowdfunding goal ($300,000), selling the devices at $199 per unit.

Competitive Landscape: Chui’s three major competitors are Goji, Skybell (formerly idoorcam) and Doorbot. However, none of the other smart doorbells utilize facial recognition, offering only video connections.

3. Enlighted
Funding: Enlighted’s most recent funding, a $20 million Series C round, closed in 2013. RockPort Capital Partners led the investment round and was joined by Draper Fisher Jurvetson, Kleiner Perkins Caufield & Byers and Intel Capital. This brings total funding to date to approximately $36 million.

Why they’re on this list: Lighting is an immense cost to building operators, as well as a large factor for employee comfort.

“Lighting consumes 25-40 percent of commercial office building electricity and is a major factor in worker comfort and productivity, yet more than ninety percent of existing buildings have little more than a wall switch,” said CEO Tushar Dave. “When asked, facility managers tend to identify cost and complexity as the key reasons why they have allowed such waste in their facilities.”

Enlighted has created “people-smart sensors” that gather real-time environmental data and analytics at each light fixture within a building, while networking those sensors in a way that delivers even more value to building owners/operators.

Enlighted takes a slightly different tack to the smart lighting challenge than many of its competitors. The key differentiator of Enlighted’s product suite is the fact that it centers a lot of its control intelligence in the “Enlighted Sensor” devices, which attach to new or existing LED, fluorescent, CFL, or HID light fixtures.

Unlike a typical wireless lighting network node, which tends to connect directly to lighting drivers (for LEDs) or ballasts (for fluorescent or HID lights), Enlighted’s sensors not only control lights, but also monitor light levels, temperature, occupancy and power consumption for the 100 square feet of floor space directly beneath each of them.

That means each sensor knows not only what its individual light is doing, but also what it should be doing. In other words, instead of relying on the vagaries of wireless networking architecture to carry data back to a central control platform, come up with preferred control options, and then send those commands back to light fixtures, Enlighted's nodes can handle most tasks on their own.

In addition to creating innovative lighting sensors, Enlighted is also creating new IoT products within its Enlighted Labs, a testing ground for product ideas. One such product is the Occupancy Sensing App, currently being used at the Enlighted headquarters ahead of a potential larger-scale launch. The app detects heat and movement within conference rooms, so that employees can easily check an app to see which conference rooms are occupied. This cuts down on the time employees spend circling an office building looking for a space to hold a meeting.

Enlighted is also working on using its sensor data for other things, like integrating their data with building HVAC systems to fine-tune heating and cooling, or to respond to utility demand response calls to shed power use when the grid is under stress.

Customers: LinkedIn, the City of San Jose, Interface Global, Menlo Business Park and Agilent Technology.

Competitive Landscape: Enlighted will compete with Lutron, Watt Stopper, Redwood Systems, Daintree, Adura and Digital Lumens, among others.



Windows 10 getting put through its paces by 450,000 ‘highly active’ testers

More people are testing the future OS than any previous version, and the bug fixes are piling up.

Microsoft is enjoying a considerable effort on the part of Windows 10 testers, who are reporting a large number of bugs and helping push the new operating system along quickly.

Gabe Aul, engineering general manager for the Operating Systems Group at Microsoft, made his December blog update on Windows all about fixes and improvements, and there are a lot of them. He said that more than 1.5 million registered Windows Insiders are banging away at Windows 10, about 450,000 of whom are considered “highly active.” That is much higher than any previous public beta for Windows.

Microsoft has been receiving feedback from the Windows Insider Program in the forms of both bug reports and suggestions for changes/additions/improvements. Aul said that so far, Microsoft has fixed almost 1,300 bugs that were reported by users in the program.

“My favorite recent example of the latter is a bug that would have been really tricky to catch with test automation or by other means: In certain circumstances (very rare) the OneDrive icon in File Explorer can be replaced by the Outlook icon! That’s the kind of fit and finish bug that real usage at scale by Windows Insiders helps us find and fix,” Aul wrote.

The bug reports span three builds: 9841, 9860, and 9879. Some bugs remain unfixed but are known to Microsoft and simply aren't urgent. Changes requested by users are making their way into the OS as well.

“For example, we added the option to choose which folder is the default when opening File Explorer, which many of you requested,” Aul wrote. “We also added the ability to turn off recent files and/or frequent folders in ‘Home,’ and added a little animation/transition when opening the Start menu, which were also frequently requested.”

He also talked about fixing numerous kernel-level bugs and causes of blue screen failures, which have been greatly reduced from earlier builds.

Aul ended his post by saying that users will see a few new big things and a lot of small improvements coming in future builds of Windows 10. Rumor has it there will be a new build in January and a wider public beta in February.



Worst security breaches of the year 2014: Sony tops the list

Theft of credit card numbers from stores was the major trend in data breaches, signaling the maturity of for-profit cybercrime networks

As 2014 winds down, the breach of Sony Pictures Entertainment is clearly the biggest data breach of the year and among the most devastating to any corporation ever.

Attackers broke in and took whatever they wanted, exfiltrating gigabytes and gigabytes of documents, emails and even entire movies, apparently at will for months and months on end.

Posting the stolen data and the celebrity nature of much of it has resulted in a public relations nightmare for the company. It revealed snarky personal comments never meant to go public as well as personal information such as Social Security numbers and salaries and competitive information about projects in progress.

The scenario is any corporate IT security pro’s worst fear – being pwned and hung out to dry publicly. Add to that lawsuits being filed against Sony by former employees seeking damages they say they suffered because the company failed to adequately protect the data.

Whereas most breaches are carried out for profit – such as theft of credit card information – this attack was intended to hurt its victim as much as possible on multiple fronts and has been very successful.

Many of the big for-profit breaches involved compromises of the credit/debit card swiping machines at retail stores, among them Target, Home Depot, Neiman Marcus, Michaels and P.F. Chang’s.

A common way the crooks got in was by infiltrating trusted business partners and stealing legitimate credentials for accessing the victims’ networks. Once inside, they moved from machine to machine until they reached the subnets containing point-of-sale machines, which they infected with scrapers to steal card numbers and expiration dates.

While Sony’s woes dominated the headlines about hacks, there were other significant break-ins this year. Here are a few of them, briefly described.

Sony
How they got in – Unknown. Speculation ranges from an attack launched in a Thailand hotel to an inside job.
How long they went undetected – Unknown.
How they were discovered – On Nov. 22 employee computers received messages threatening public distribution of stolen data and displays of skulls on their screens.

Target
The Target breach happened last year but the important details came out this year so it’s included here.
Data compromised – 40 million credit and debit cards, 70 million phone numbers, mailing addresses and email addresses.
How they got in – Hacking the credentials of a legitimate business associate, an HVAC company, to get on Target’s network, then installing malware on point-of-sale machines.
How long they went undetected – About two weeks.
How they were discovered – The Department of Justice told them about it, but anti-malware software flagged the problem as well.

Home Depot
Data compromised – As many as 56 million credit cards put at risk, 53 million email addresses
How they got in – Via a third-party vendor’s credentials followed up by exploiting an unpatched Windows flaw.
How long they went undetected – From April to September.
How they were discovered – The stores’ executives were told by bank and law-enforcement officials.

Goodwill Industries (C&K Systems)
Data compromised – 868,000 credit/debit card numbers.
How they got in – By infecting point-of-sale card-swipe machines after compromising the network of the operator of the machines. Two other unnamed clients of C&K Systems were also compromised.
How long they went undetected – 18 months.
How they were discovered – Federal officials and payment card investigators told them.

JP Morgan
Data compromised – Phone numbers and email addresses for 76 million households plus 7 million small businesses.
How they got in – Criminals compromised the computer of an employee with special privileges that was used both at work and at home.
How long they went undetected – Three months.
How they were discovered – Internal investigation, as well as outside data about a massive stolen credit card ring.

P.F. Chang’s
Data compromised – An unconfirmed number of credit card numbers, possibly as many as an estimated 7 million.
How they got in – Undisclosed, but point-of-sale systems were compromised.
How long they went undetected – Nine months.
How they were discovered – The Secret Service told them about the breach.

Neiman Marcus
Data compromised – 350,000 payment cards
How they got in – Uncertain, but point-of-sale systems were compromised
How long they went undetected – Three months.
How they were discovered – Credit card processors warned about a possible breach and a consultant confirmed it.

Michaels
Data compromised – 2.6 million credit/debit cards
How they got in – Undisclosed but point-of-sale machines were infected
How long they went undetected – Eight months
How they were discovered – Undisclosed



Hardware torpedoes IBM’s Q4 revenue

Sluggish sales of IBM mainframes and other hardware put a damper on the company’s latest quarterly earnings report

Still hampered by slow hardware sales, IBM reported a 5.5 percent decline in revenue for the fourth quarter, even as it managed to post a 6 percent gain in net income.

Because of the sluggish revenue, IBM senior management will forgo their bonuses, or “personal annual incentive payments,” for the year, said Ginni Rometty, IBM chairman, president and CEO, in a statement.


IBM’s fourth-quarter revenue was $27.7 billion, compared with $29.3 billion in the fourth quarter of 2012, the company announced Tuesday. IBM’s revenue fell short of analysts’ expectation of $28.2 billion, an estimate provided by Thomson Reuters. Revenue for the entire year was $99.8 billion, compared with $104.5 billion in the year prior, a 4.6 percent decrease.

IBM’s fourth-quarter income was $6.2 billion, compared with $5.8 billion in the fourth quarter of 2012. For the year, IBM reported $16.5 billion in income, down 1 percent from $16.6 billion in the prior year.

Revenue from IBM’s Systems and Technology hardware segment was $4.3 billion, down 26 percent from the fourth quarter of 2012. For the year, Systems and Technology delivered $14.4 billion, a decrease of 18.7 percent from the full year 2012.

The services divisions produced so-so results for the company. Revenue from Global Technology Services was $9.9 billion for the quarter, down 3.6 percent from $10.3 billion the same quarter a year before. Revenue from the Global Business Services segment grew slightly, up 0.6 percent to $4.7 billion for the fourth quarter, which ended Dec. 31.

For the year, Global Technology Services revenue shrank to $38.5 billion, down 4.2 percent from $40 billion the year before. Global Business Services revenue also shrank, by 0.9 percent, to $18.4 billion from $18.6 billion a year ago.

Revenue from the software business grew modestly. For the fourth quarter of 2013, the software group logged $8.1 billion in revenue, a 2.8 percent increase from $7.9 billion in the same quarter a year ago. For the year, the IBM software group generated $26 billion in revenue, up 1.9 percent from $25.4 billion in 2012.

“Our software, services and financing businesses are all on solid ground, but in hardware, we’ve entered the back-end mainframe product cycle, and we are dealing with some challenges in other areas. These are impacting our overall results,” said Martin Schroeter, IBM chief financial officer, in a webcast to investors.

With hardware, IBM was plagued in a number of areas. System Z and mainframe sales were down because they are in-between product releases. Other areas of hardware are feeling the impact of “business model issues due to market shifts,” some of which is coming from pricing pressure from lower-cost hardware alternatives, Schroeter said.

System Z sales were down 37 percent, when compared to a very strong quarter a year ago. Sales of MIPS (Microprocessor without Interlocked Pipeline Stages) mainframe systems declined 26 percent, also compared to a very strong quarter a year ago. Sales of Power systems declined 31 percent. While the company continues to ship Power systems, the greater efficiency of the newer systems reduces the size of the systems being shipped, lowering revenue for IBM, Schroeter said. System X sales were down 15 percent.

Pure systems, a new offering introduced last year, provided one bright spot on the hardware side. IBM shipped more than 2,500 Pure systems in the past quarter, and 10,000 since launch.

Another area of concern for IBM has been sales in China, which declined by 23 percent, chiefly in hardware sales.

A large part of this decline came from a broad-reaching Chinese government economic reform initiative, which has stalled state agency IT purchases. This initiative also slowed sales in IBM’s last quarter as well.

“While there is more clarity in the overall plan, we continue to believe it will take some time for business in China to improve,” Schroeter said.

In contrast, revenue in Japan grew by 4 percent, and has grown for the past five quarters. Schroeter attributed this success to IBM’s ability to shift market focus and investment to meet current needs in IT.

In the past few weeks IBM announced two major initiatives. The company plans to invest an additional $1.2 billion to beef up its cloud infrastructure. It has also launched a new business group focused on providing Watson-style cognitive computing capabilities to help organizations make better use of their large amounts of data.

The company expects both initiatives to lead to substantial business over time.

“We believe that data as a natural resource will drive demand going forward, and big data analytics will provide the basis for competitive differentiation,” Schroeter said.

Data analysis is now “nearly a $16 billion” annual business for the company, he said. Cloud business accounts for $4.4 billion in revenue for the company, of which $1.7 billion was delivered as a cloud service.

IBM continued to perform well for investors. This quarter, the company posted earnings per share of $5.73, a 12 percent increase over EPS of $5.13 for the fourth quarter of 2012. For the year, earnings were $14.94 per share compared with $14.37 per share in 2012, a 4 percent increase.

The company is still on track to reach $15 per share by 2015, Schroeter said.



11 Cyber Monday tech deals that truly save you serious money

Real deals, not cyber scams
If you want to see how morally bankrupt the post-Thanksgiving shopping season has become, just poke around online during “Cyber Monday.” You’ll find many of our nation’s major retailers marking up their list prices to advertise “savings” that don’t actually exist, and pushing “limited-time” offers that are readily available elsewhere. But worry not; we’ve dug through these borderline scams to find 11 deals you should actually know about.

Motorola.com: Unlocked Moto X for $140 off
The 2014 Moto X is one of our favorite Android phones. You can customize it with different colors and textures (including real leather and wood), and it bucks the bloatware trend among Android handsets. Motorola’s also good about updating its software—the Moto X is already running Android 5.0 Lollipop. The $140 discount starts Monday at 12 noon Eastern time.

Why it’s a good deal: The discounted base price of $360 is killer for an unlocked “hero” phone, and AT&T or T-Mobile will give a discount on wireless service. [Link]

Best Buy: LG G3 for $1 on-contract
The LG G3 was this year’s sleeper hit among Android phones, and unquestionably the one to get if you value camera quality above all else. With laser-assisted auto-focus, the G3 lines up shots quickly and excels in low light, so you rarely have to call for a do-over.

Why it’s a good deal: Most carriers are still selling the G3 for its sticker price of $199 on contract. While that price will probably fall as the new year rolls around, it doesn’t get any better than a buck right now. [Link]

Microsoft Store: Acer Aspire E15 for $399
The Aspire E15 is a run-of-the-mill budget notebook, with an Intel Core i5 processor, 4GB of RAM, a 500GB hard drive and a built-in DVD player. But because it comes from the Microsoft Store, it has none of the trialware and bloatware that comes standard on laptops from other major retailers. That alone makes it worth a look.

Why it’s a good deal: This is one of the rare Cyber Monday laptop deals that packs Intel Core i5 power for $400. Just don’t expect miracles from the display and build quality. [Link]

Newegg: Samsung 500GB SSD with Far Cry 4 for $180
With many new PC games gobbling gigabytes by the dozen, you’re going to need a roomy solid state drive to run them at top speeds. Samsung’s 840 EVO SSD has a whopping 500GB of storage and respectable read/write speeds of 540MB/s and 520MB/s, respectively. There’s also a handy transfer tool for upgrading from a smaller drive.

Why it’s a good deal: Newegg has a bunch of storage deals right now (including a $50, 128GB SSD from Sandisk) but $180 is darned cheap for a 500GB drive. The free copy of Far Cry 4 (normally $60) is the cherry on top for your new PC gaming rig. [Link]

Walmart: PlayStation 4 bundle for $449
Console bundles are everywhere this holiday season, but Walmart’s $449 bundle will be hard to beat, especially for families. It includes the PlayStation 4 console, LittleBigPlanet 3, Lego Batman 3, your choice of another game, and a second controller.

Why it’s a good deal : The PS4 normally costs $400, and most other holiday bundles are throwing in a game or two for free. This bundle has three games and an extra controller, so you’re getting about $120 in value over other deals. [Link]

MacMall: 13-inch MacBook Pro with Retina Display for $1,030
Apple’s current MacBook Pros are over a year old now, but they’re still among the best professional-grade laptops you can buy. The discounted model has a dual-core Intel Core i5 processor, 4GB of RAM and 128GB of solid state storage, and it lasted nearly 11 hours in Macworld’s battery test.

Why it’s a good deal: You rarely see Apple products discounted by more than $100 on Black Friday or Cyber Monday, but MacMall’s MacBook Pro deal manages to be $270 off the sticker price. [Link]

Google Play: LG G Watch for $99, $50 of Store credit
The LG G Watch, one of the first wave of Android Wear smartwatches, was quickly upstaged by classier-looking wearables such as the Moto 360 and LG’s own G Watch R. Still, it does a decent job of showcasing how Android Wear works, and it’s practically an impulse buy for the curious at $99.

Why it’s a good deal : The $50 credit toward apps, videos and games from the Google Play Store effectively halves the G Watch’s price if you were planning to buy some content anyway. You can still get the $50 credit when paying full price for a G Watch R, Asus Zenwatch, Samsung Gear Live, Sony SmartWatch 3 or Nexus 9 tablet. [Link]

B&H: iMac with Retina Display for $2,299
Apple’s iMac with Retina Display is a fine piece of machinery, packing 14.7 million pixels into its 27-inch “5K” panel. B&H is knocking $200 off the base model, which includes a 3.5GHz quad-core Intel Core i5 processor, 8GB of RAM and 1TB of fusion drive storage.

Why it’s a good deal: You don’t often see big discounts on Apple products, especially brand-new ones. B&H’s discount doesn’t make the Retina display iMac cheap by any means—rather, a slightly easier splurge. [Link]

Microsoft Store: $100 to $150 off the Surface Pro 3
The Surface Pro 3 is a shining example of what a high-end Windows machine can be, weighing as little as an 11-inch MacBook Air but with a taller, higher-res touchscreen. Detach the keyboard cover, and you have a 1.7-pound tablet with a pen for sketching and a kickstand. Microsoft is knocking $100 off the price for Core i5 models, and $150 off for Core i7 models.

Why it’s a good deal: The discount brings the base price to $1,030 with 128GB of storage and 4GB of RAM. That’s just $30 more than a 13-inch MacBook Pro with similar specs. If you missed the same deal on Black Friday, now’s the time to pull the trigger. [Link]

Staples: Acer Chromebook for $150
Like all other Chromebooks, this one can’t run traditional Windows software such as Office and iTunes. But Acer’s CB3-111-C670 Chromebook gets you online with a full mouse and keyboard at your disposal. It has an 11.6-inch, 1366×768 display, Celeron processor and 2GB of RAM, which should be all you need for basic browsing.

Why it’s a good deal: Normally, Asus’ competing 11-inch Chromebook is the slightly better buy, but these are two very similar machines. The $50 discount on the Acer is just enough to give it the edge. [Link]

Dell: 22-inch 1080p monitor for $99
The holiday shopping season can be a good time to upgrade aging computer monitors, and Dell’s deal in particular is worth a look. The S2240L on sale for $99 has a 21.5-inch display, narrow bezels and a choice of VGA or HDMI input. The screen also tilts from 5 degrees down to 21 degrees up.

Why it’s a good deal: You don’t typically see 22-inch monitors of decent quality cracking the $100 barrier, so multi-monitor users may want to think about stocking up. You’ll have to move quickly, though, as Dell says it will have limited quantities starting at 8 a.m. Eastern. [Link]



The top infosec issues of 2014

Security experts spot the trends of the year almost past
There is still time for any list of the “top information security issues of 2014” to be rendered obsolete. The holiday shopping season is just getting into high gear, after all, and everybody knows it was from late November to mid-December last year when the catastrophic Target breach occurred.

But this list is about more than attacks and breaches – it is about broader infosec issues or trends that are likely to shape the future of the industry.

Several experts offered CSO some thoughts on their top picks, what can be learned from them and whether that knowledge can help organizations improve their security posture in the coming year.

Cyber threats trump terrorism
An Associated Press story this past week on the federal government’s $10-billion annual effort to secure its multiple agencies noted, almost in passing, that, “intelligence officials say cybersecurity now trumps terrorism as the No. 1 threat to the U.S.”

That makes sense to Sarah Isaacs, managing partner at Conventus. While cyber attacks have been expanding and evolving for decades, Isaacs said there has been a qualitative change: It is not just criminals trying to steal money – it is nation states using attacks for espionage and even military advantage.


In May, “the Department of Justice indicted five members of China’s People’s Liberation Army on felony hacking charges for stealing industrial secrets,” she said. “We’ve never seen that before.”

Then in September, “NATO agreed that a cyber-attack could trigger a military event,” she said. “This is about more than protecting credit cards. This is escalating to new levels.”
Author, security guru and Co3 Systems CTO Bruce Schneier would likely agree. In a recent blog post, he wrote that increasingly sophisticated attacks, especially advanced persistent threats (APT) that are not about financial theft, are coming from, “a new sort of attacker, which requires a new threat model.”

There is evidence of that in a recent study by ISACA on APTs. CEO Rob Clyde said 92% of respondents, “feel APTs are a serious threat and have the ability to impact national security and economic stability.”

Clouds – private, public and hybrid – are not new. But the steady increase in the use of cloud storage services is posing larger risks to businesses.

Schneier, in his blog post, said the continuing migration to clouds means, “we’ve lost control of our computing environment. More of our data is held in the cloud by other companies …”

While experts say cloud service providers frequently provide better security, that may not be true of so-called “shadow” or “rogue” use of clouds by workers who believe that is an easier way to do their jobs than going through IT.

Internet of Everything (IoE) – a hacker frontier
The Internet of Things (IoT) is so last year. It is now the IoE. Smart, embedded devices in homes, cars, electronics, machines, and worn by individuals are now mainstream. They already number in the billions, and estimates of their growth range from 50 billion by 2020 to more than a trillion within the next decade.

And that means a growing tsunami of data flowing to the Internet, where it can be sold for marketing purposes or stolen for more malicious means.

Isaacs, who says she is among those who uses an exercise wearable, said she used “dummy data” to register it. “So nobody knows it’s my data,” she said. “It can’t be mapped directly to me.”

In general, however, she said, “everyone is oversharing everything. The threats are broad and potentially catastrophic. I’m very nervous about the smart cars I see.”

There does seem to be an increasing awareness of the privacy implications of smart cars. The AP reported this week that 19 automakers that make most of the cars and trucks sold in the U.S. signed on to a set of principles, delivered to the Federal Trade Commission (FTC), that seek to reassure vehicle owners that the information gathered by those vehicles, “won’t be handed over to authorities without a court order, sold to insurance companies or used to bombard them with ads … without their permission.”

The vulnerabilities of “smart” devices to hacking have been demonstrated numerous times, prompting Phil Montgomery, senior vice president of Identiv, to call for, “a more regimented standards-based security approach that relies less on outdated processes around username/password technology and more on stronger forms of authentication.”

No parties for third parties
This was the year that the risks of breaches through third-party contractors made it into mainstream consciousness. The Target breach, which exposed 70 million records, was just one of many that came through outside vendors.

Regulatory agencies are trying to maintain that awareness. Stephen Orfei, the new general manager of the Payment Card Industry Security Standards Council (PCI SSC) noted in a recent interview that, “security is only as good as your weakest link – which means the security practices of your business partners should be as high a priority as the integrity of your own systems.”
Christine Marciano, president of Cyber Data-Risk Managers, said that in addition to vetting vendors for rigorous security standards, companies should, “require their vendors to carry and purchase cyber/data breach insurance, to indemnify them for any costs associated with a data breach caused by the vendor’s negligence.”

The porous, sometimes malicious, human OS
While third parties may be a weak link in the security chain, that is less likely due to technology and more due to the human factor.

It was former National Security Agency contractor Edward Snowden who brought the risks of malicious insiders to international attention in 2013, but the danger to enterprises can be just as great from loyal insiders who are simply “clueless or careless,” and fall for social engineering scams.

Joseph Loomis, founder and CEO of CyberSponse, said he is, “sure there are major companies out there with little controls over their employees and their access rights. Who is watching who and what they’re doing?”

It is also about employees controlling themselves when presented with ever-more persuasive social engineering attacks.

The federal government reported earlier this year that 63 percent of the breaches of its systems in 2013 were due to human error.

According to Marciano, “employee negligence was at an all-time high in 2014,” with the problems ranging from, “failure to perform routine security procedures to lack of security awareness, routine mistakes and misconduct.”

Eldon Sprickerhoff, cofounder and chief security strategist at eSentire, noted that, “phishing emails are getting better and better. I’ve seen some that were so well targeted, so well done that I could not tell the difference.”

And it is not just the average worker who is a problem. Identity Finder CEO Todd Feinman said the problem goes all the way to the top. “Many executives don’t know where their sensitive data is so they don’t know how to protect it,” he said.

Ubiquitous BYOD
While BYOD is now mainstream in the workplace, Isaacs calls the increased focus on mobile computing, “very scary, and it’s going to get even worse.”

BYOD is now bringing, “extremely unreliable business applications inside the walls of corporations,” she said. “There are a lot of software vulnerabilities. Every app that is free or 99 cents, probably doesn’t have great level of security. And people don’t install patches either.”

According to Clyde, “there are now many times more mobile devices than PCs in the world. In fact, in many regions of the world, mobile devices are the only way most users connect to the Internet,” yet security remains a relative afterthought.

ISACA found that, “fewer than half (45%) have changed an online password or PIN code.”

And now, connected wearable devices (BYOW) are becoming common in the workplace, yet, “a majority of professionals say their BYOD policy does not address wearable tech, and some do not even have a BYOD policy,” Clyde said.

The age of Incident Response (IR)
All of the above issues have led to an increased focus on IR. According to Schneier, this is not just the year but the decade of IR, following a decade of protection products and another of detection products.

In his blog post, he cited three trends: more data held in the cloud and more networks outsourced; more APTs by nation states; and a continuing lack of investment in protection and detection, leaving the bulk of the burden on response.
But IR has been more on everybody’s lips in 2014 than even a couple of years ago. The mantra of security experts is that it is not a matter of if, but when, an organization will be breached, and that an effective IR plan (combined with detection) can make attacks more of a nuisance than a disaster.

Getting IR right is crucial, but Tom Bain, vice president of CounterTack, calls it, “the hardest job in security. You can have all the technology in place to detect, prevent and analyze, but if your workflow is broken, or the team is so inundated with incident investigation, you are still vulnerable,” he said.

More regulation, please

An industry that generally decries government regulation – retail – is now singing the opposite tune when it comes to cyber security.

A Nov. 6 letter signed by 44 state and national organizations representing retailers, addressed to the leaders of both houses of Congress, called for, “a single federal law applying to all breached entities (to) ensure clear, concise and consistent notices to all affected consumers regardless of where they live or where the breach occurs.”

Sprickerhoff said such a law would be, “a good first step. There are 38 states with different definitions of what is a breach, so things are getting a bit out of hand,” he said. “If you had unifying description of what needs to be done, that’s not a bad thing.”

But, of course, notification is not the same as improving security. And there are limits to what regulation can accomplish in that area.
“I worry that ‘compliance with frameworks’ attracts a lot of attention,” said Richard Bejtlich, chief security strategist at FireEye. “I would prefer that organizations focus on results or outputs, like what was the time from detection to containment?

“Until organizations track those metrics, based on results, they will not really know if their security posture is improving,” he said.

What to do?
There are, of course, no magic bullets in security, Isaacs said, noting that it is almost impossible to single out the biggest threat. “I heard a speech where it was described as ‘death by a thousand cuts,’” she said.

But experts do have suggestions. Sprickerhoff said more training is crucial, not just the security awareness of employees, but the next generation of IT security experts.

“I don’t think it’s ever been harder to find good people in IT security,” he said. “There’s not much in course work at the college level.”

Eyal Firstenberg, vice president research, LightCyber, said improving security is going to take a combination of technology and training.

“There is a need for fast and accurate alerts and notifications, which ultimately determine the outcome of these cyber engagements,” he said, but added that, “organizations need more professional diagnosticians on staff who are trained to know what threats are real and need to be addressed, and which ones aren’t.”

Ashley Hernandez, an instructor for Guidance Software, calls for more communication among organizations. “Security professionals need to have a way to share intelligence about patterns or attack types to others in their industry or trusted security groups,” she said.

Clyde notes that ISACA, “has a number of programs, from risk governance frameworks like COBIT 5 to the Cybersecurity Nexus (CSX), to ensure cybersecurity professionals have the skills they need to defend enterprises from the plethora of threats.”

Finally, Loomis offers a short list:
Improve procurement processes. “It takes too long to buy new tools,” he said.
Start educating your staff on what the DHS and NIST Frameworks really are. Read the MITRE book on the 10 strategies to a world-class SOC.
Stop believing the marketing and get real-world feedback on tools. “Security has put a lot of money into marketing, but that doesn’t mean the solution is right for the organization,” he said.
Run simulations. “When was the last time a company ran a real cyber drill?” he asked.
Stop following paper policy. “Militarizing your team, running drills, making it second nature is what will help the response process, not following a check list,” he said.


Comptia A+ Training, Comptia A+ certification

Best comptia A+ Training, Comptia A+ Certification at Certkingdom.com

Posted in Tech | Tagged , , , , , , , , , , | Leave a comment

Room to grow: Tips for data center capacity planning

Capacity planning needs to provide answers to two questions: What are you going to need to buy in the coming year? And when are you going to need to buy it?

To answer those questions, you need to know the following information:
Current usage: Which components can influence service capacity? How much of each do you use at the moment?
Normal growth: What is the expected growth rate of the service, without the influence of any specific business or marketing events? Sometimes this is called organic growth.
Planned growth: Which business or marketing events are planned, when will they occur, and what is the anticipated growth due to each of these events?
Headroom: Which kind of short-term usage spikes does your service encounter? Are there any particular events in the coming year, such as the Olympics or an election, that are expected to cause a usage spike? How much spare capacity do you need to handle these spikes gracefully? Headroom is usually specified as a percentage of current capacity.
Timetable: For each component, what is the lead time from ordering to delivery, and from delivery until it is in service? Are there specific constraints for bringing new capacity into service, such as change windows?

From that information, you can calculate the amount of capacity you expect to need for each resource by the end of the following year with a simple formula:
Future Resources = Current Usage x (1 + Normal Growth + Planned Growth) + Headroom
You can then calculate for each resource the additional capacity that you need to purchase:

Additional Resources = Future Resources - Current Resources
Perform this calculation for each resource, whether or not you think you will need more capacity. It is okay to reach the conclusion that you don’t need any more network bandwidth in the coming year. It is not okay to be taken by surprise and run out of network bandwidth because you didn’t consider it in your capacity planning. For shared resources, the data from many teams will need to be combined to determine whether more capacity is needed.
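To make the arithmetic concrete, here is a minimal Python sketch of the two formulas above. The resource name and all of the figures are hypothetical, and the headroom convention (a percentage of current capacity) follows the definition given earlier; treat it as an illustration rather than a planning tool.

# Sketch of the capacity-planning formulas above; all figures are illustrative.

def future_resources(current_usage, normal_growth, planned_growth, headroom):
    # Future Resources = Current Usage x (1 + Normal Growth + Planned Growth) + Headroom
    return current_usage * (1 + normal_growth + planned_growth) + headroom

def additional_resources(future, current_capacity):
    # Additional Resources = Future Resources - Current Resources
    return max(0.0, future - current_capacity)

# Hypothetical example: network bandwidth, in Gbps.
current_usage = 40.0                # Gbps in use today
current_capacity = 60.0             # Gbps provisioned today
headroom = 0.20 * current_capacity  # headroom specified as a percentage of current capacity
future = future_resources(current_usage, normal_growth=0.25, planned_growth=0.10, headroom=headroom)
print(f"Future need: {future:.1f} Gbps; additional to buy: {additional_resources(future, current_capacity):.1f} Gbps")

With these made-up numbers the sketch reports a future need of 66 Gbps and a 6 Gbps shortfall, which is exactly the kind of per-resource answer the annual plan should produce.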
Current usage

Before you can consider buying additional equipment, you need to understand what you currently have available and how much of it you are using. Before you can assess what you have, you need a complete list of all the things that are required to provide the service. If you forget something, it won't be included in your capacity planning, and you may run out of that one thing later, leaving you unable to grow the service as quickly as you need.
What to track

If you are providing Internet-based services, the two most obvious things needed are some machines to provide the service and a connection to the Internet. Some machines may be generic machines that are later customized to perform given tasks, whereas others may be specialized appliances.

Going deeper into these items, machines have CPUs, caches, RAM, storage and network. Connecting to the Internet requires a local network, routers, switches and a connection to at least one ISP. Going deeper still, network cards, routers, switches, cables and storage devices all have bandwidth limitations. Some appliances may have higher-end network cards that need special cabling and interfaces on the network gear. All networked devices need IP addresses. These are all resources that need to be tracked.

Taking one step back, all devices run some sort of operating system, and some run additional software. The operating systems and software may require licenses and maintenance contracts. Data and configuration information on the devices may need backing up to yet more systems. Stepping even farther back, machines need to be installed in a data center that meets their power and environment needs. The number and type of racks in the data center, the power and cooling capacity and the available floor space all need to be tracked. Data centers may provide additional per-machine services, such as console service. For companies that have multiple data centers and points of presence, there may be links between those sites that also have capacity limits. These are all additional resources to track.

Outside vendors may provide some services. The contracts covering those services specify cost or capacity limits. To make sure that you have covered every possible aspect, talk to people in every department, and find out what they do and how it relates to the service. For everything that relates to the services, you need to understand what the limits are, how you can track them and how you can measure how much of the available capacity is used.
How much do you have

There is no substitute for a good up-to-date inventory database for keeping track of your assets. The inventory database should be kept up to date by making it a core component in the ordering, provisioning and decommissioning processes. An up-to-date inventory system gives you the data you need to find out how much of each resource you have. It should also be used to track the software license and maintenance contract inventory, and the contracted amount of resources that are available from third parties.

Using a limited number of standard machine configurations and having a set of standard appliances, storage systems, routers and switches makes it easier to map the number of devices to the lower-level resources, such as CPU and RAM, that they provide.

Terms to know
QPS: Queries per second. Usually how many web hits or API calls are received per second.
Active Users: The number of users who have accessed the service in the specified timeframe.
MAU: Monthly active users. The number of users who have accessed the service in the last month.
Engagement: How many times on average an active user performs a particular transaction.
Primary resource: The one system-level resource that is the main limiting factor for the service.
Capacity limit: The point at which performance starts to degrade rapidly or become unpredictable.
Core driver: A factor that strongly drives demand for a primary resource.
Time series: A sequence of data points measured at equally spaced time intervals. For example, data from monitoring systems.

How much are you using now
Identify the limiting resources for each service. Your monitoring system is likely already collecting resource use data for CPU, RAM, storage and bandwidth. Typically it collects this data at a higher frequency than required for capacity planning. A summarization or statistical sample may be sufficient for planning purposes and will generally simplify calculations. Combining this data with the data from the inventory system will show how much spare capacity you currently have.

Tracking everything in the inventory database and using a limited set of standard hardware configurations also makes it easy to specify how much space, power, cooling and other data center resources are used per device. With all of that data entered into the inventory system, you can automatically generate the data-center utilization rate.
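As a rough illustration of that roll-up, the sketch below joins per-resource capacity from a hypothetical inventory export with summarized usage from monitoring to print utilization and spare capacity. The dictionaries are stand-ins for whatever export format your inventory and monitoring systems actually produce.

# Sketch: join inventory capacity with summarized monitoring usage.
# Both dictionaries are illustrative stand-ins for real inventory/monitoring exports.
inventory_capacity = {"cpu_cores": 4800, "ram_gb": 18432, "storage_tb": 900}
current_usage = {"cpu_cores": 3100, "ram_gb": 12600, "storage_tb": 610}  # e.g. 95th-percentile values

for resource, capacity in inventory_capacity.items():
    used = current_usage[resource]
    print(f"{resource}: {used}/{capacity} in use ({used / capacity:.0%}), spare: {capacity - used}")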
Normal growth

The monitoring system directly provides data on current usage and current capacity. It can also supply the normal growth rate for the preceding years. Look for any noticeable step changes in usage, and see if these correspond to a particular event, such as the roll-out of a new product or a special marketing drive. If the offset due to that event persists for the rest of the year, calculate the change and subtract it from subsequent data to avoid including this event-driven change in the normal growth calculation. Plot the data from as many years as possible on a graph, to determine if the normal growth rate is linear or follows some other trend.
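One way to separate an event-driven step change from organic growth is to subtract the step and fit a simple trend line to what remains. The following sketch assumes monthly usage totals exported from monitoring and uses NumPy's polyfit for the straight-line fit; the series, the month the step occurs and its size are all made-up numbers.

# Sketch: estimate organic growth after removing a known step change.
import numpy as np

monthly_usage = np.array([100, 104, 109, 113, 118, 150, 155, 159, 164, 170, 175, 181], dtype=float)

# Suppose monitoring shows a persistent ~30-unit jump starting in month 6 (a product launch).
step_month, step_size = 5, 30.0
adjusted = monthly_usage.copy()
adjusted[step_month:] -= step_size

# Fit a straight line to the adjusted series to check whether organic growth is roughly linear.
months = np.arange(len(adjusted))
slope, intercept = np.polyfit(months, adjusted, 1)
print(f"Organic growth: ~{slope:.1f} units/month (~{slope * 12 / adjusted[0]:.0%} per year)")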

Planned growth
The second step is estimating additional growth due to marketing and business events, such as new product launches or new features. For example, the marketing department may be planning a major campaign in May that it predicts will increase the customer base by 20 to 25 percent. Or perhaps a new product is scheduled to launch in August that relies on three existing services and is expected to increase the load on each of those by 10 percent at launch, increasing to 30 percent by the end of the year. Use the data from any changes detected in the first step to validate the assumptions about expected growth.

Headroom
Headroom is the amount of excess capacity that is considered routine. Any service will have usage spikes or edge conditions that require extended resource usage occasionally. To prevent these edge conditions from triggering outages, spare resources must be routinely available. How much headroom is needed for any given service is a business decision. Since excess capacity is largely unused capacity, by its very nature it represents potentially wasted investment. Thus a financially responsible company wants to balance the potential for service interruption with the desire to conserve financial resources.

Your monitoring data should be picking up these resource spikes and providing hard statistical data on when, where and how often they occur. Data on outages and postmortem reports are also key in determining reasonable headroom.

Another component in determining how much headroom is needed is the amount of time it takes to have additional resources deployed into production from the moment that someone realizes that additional resources are required. If it takes three months to make new resources available, then you need to have more headroom available than if it takes two weeks or one month. At a minimum, you need sufficient headroom to allow for the expected growth during that time period.
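As a back-of-the-envelope check, that lead-time floor on headroom can be computed directly. The growth rate and lead time below are illustrative assumptions; add your usual allowance for spikes on top of this floor.

# Sketch: minimum headroom needed to cover growth during the resource lead time.
monthly_growth_rate = 0.03   # assumed 3% combined growth per month
lead_time_months = 3         # assumed order-to-in-service lead time

# Growth compounds while you wait for the new capacity to arrive.
min_headroom = (1 + monthly_growth_rate) ** lead_time_months - 1
print(f"Lead-time floor on headroom: {min_headroom:.1%}")   # about 9.3% with these assumptions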

Resiliency

Reliable services also need additional capacity to meet their SLAs. The additional capacity allows for some components to fail, without the end users experiencing an outage or service degradation. The additional capacity needs to be in a different failure domain; otherwise, a single outage could take down both the primary machines and the spare capacity that should be available to take over the load.

Failure domains should also be considered at a larger scale, typically at the data-center level. For example, facility-wide maintenance work on the power systems requires the entire building to be shut down. If an entire data center is offline, the service must be able to smoothly run from the other data centers with no capacity problems. Spreading the service capacity across many failure domains reduces the additional capacity required for handling the resiliency requirements, which is the most cost-effective way to provide this extra capacity. For example, if a service runs in one data center, a second data center is required to provide the additional capacity, about 50 percent. If a service runs in nine data centers, a tenth is required to provide the additional capacity; this configuration requires only 10 percent additional capacity.

The gold standard is to provide enough capacity for two data centers to be down at the same time. This permits one to be down for planned maintenance while the organization remains prepared for another data center going down unexpectedly.
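The arithmetic behind those percentages generalizes easily: if the deployed footprint spans N failure domains and R of them must be able to be down at once, the share of the footprint that exists purely for resiliency is R/N. Here is a minimal sketch using the same interpretation as the examples above, with spare capacity expressed as a share of the total deployed footprint.

# Sketch: share of the deployed footprint that exists only to absorb failures.
def resiliency_overhead(total_domains, down):
    serving = total_domains - down
    if serving <= 0:
        raise ValueError("need at least one surviving failure domain")
    return down / total_domains

for total, down in [(2, 1), (10, 1), (12, 2)]:
    print(f"{total} data centers, {down} down: {resiliency_overhead(total, down):.0%} of the footprint is spare")

The (12, 2) case corresponds to the gold standard of tolerating one planned and one unplanned data-center outage at the same time.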

Timetable
Most companies plan their budgets annually, with expenditures split into quarters. Based on your expected normal growth and planned growth bursts, you can map out when you need the resources to be available. Working backward from that date, you need to figure out how long it takes from “go” until the resources are available.

How long does it take for purchase orders to be approved and sent to the vendor? How long does it take from receipt of a purchase order until the vendor has delivered the goods? How long does it take from delivery until the resources are available? Are there specific tests that need to be performed before the equipment can be installed? Are there specific change windows that you need to aim for to turn on the extra capacity? Once the additional capacity is turned on, how long does it take to reconfigure the services to make use of it? Using this information, you can provide an expenditures timetable.
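Working backward from the date the capacity must be in service is simple date arithmetic once the stage durations are known. The stages and durations below are hypothetical placeholders; substitute the answers to the questions above.

# Sketch: work backward from the date new capacity must be in service.
from datetime import date, timedelta

in_service_deadline = date(2015, 5, 1)        # e.g. capacity needed before a May campaign
stages_days = [
    ("purchase order approval", 14),
    ("vendor delivery",         42),
    ("installation and testing", 14),
    ("change window / cutover",   7),
]

order_by = in_service_deadline - timedelta(days=sum(days for _, days in stages_days))
print(f"Start the purchase process no later than {order_by}")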

Physical services generally have a longer lead time than virtual services. Part of the popularity of IaaS and PaaS offerings such as Amazon's EC2 and Elastic Storage is that newly requested resources have virtually instant delivery time.

It is always cost-effective to reduce resource delivery time, because shorter delivery times mean paying for less excess capacity to cover that window. This is a place where automation that prepares newly acquired resources for use has immediate value.

Advanced capacity planning

Large, high-growth environments such as popular Internet services require a different approach to capacity planning. Standard enterprise-style capacity planning techniques are often insufficient. The customer base may change rapidly in ways that are hard to predict, requiring deeper and more frequent statistical analysis of the service monitoring data to detect significant changes in usage trends more quickly. This kind of capacity planning requires deeper technical knowledge. Capacity planners will need to be familiar with concepts such as QPS, active users, engagement, primary resources, capacity limit and core drivers.

Additional math terms
Correlation coefficient: Describes how strongly measurements for different data sources resemble each other.
Moving average: A series of averages, each of which is taken across a short time interval (window), rather than across the whole data set.

Regression analysis: A statistical method for analyzing relationships between different data sources to determine how well they correlate, and to predict changes in one based on changes in another.

EMA: Exponential moving average. It applies a weight to each data point in the window, with the weight decreasing exponentially for older data points.

MACD: Moving average convergence/divergence. An indicator used to spot changes in strength, direction and momentum of a metric. It measures the difference between an EMA with a short window and an EMA with a long window.

Zero line crossover: A crossing of the MACD line through zero happens when there is no difference between the short and long EMAs. A move from positive to negative shows a downward trend in the data, and a move from negative to positive shows an upward trend.

MACD signal line: An EMA of the MACD measurement.

Signal line crossover: The MACD line crossing over the signal line indicates that the trend in the data is about to accelerate in the direction of the crossover. It is an indicator of momentum.
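To show how a few of these terms fit together, here is a small sketch that computes an EMA, the MACD and its signal line over a usage series and reports signal line crossovers. The window lengths (12, 26, 9) are conventional MACD defaults, and the synthetic daily series is illustrative only.

# Sketch: EMA, MACD and signal-line crossovers over a daily usage series (synthetic data).
def ema(series, window):
    # Exponential moving average with smoothing factor 2 / (window + 1).
    alpha = 2.0 / (window + 1)
    out = [series[0]]
    for value in series[1:]:
        out.append(alpha * value + (1 - alpha) * out[-1])
    return out

usage = [100 + 0.8 * day + (5 if day % 7 in (5, 6) else 0) for day in range(120)]  # fake daily QPS

macd = [s - l for s, l in zip(ema(usage, 12), ema(usage, 26))]
signal = ema(macd, 9)

# A crossover of the MACD line and the signal line suggests the usage trend is changing momentum.
for day in range(1, len(macd)):
    if (macd[day - 1] - signal[day - 1]) * (macd[day] - signal[day]) < 0:
        print(f"Signal line crossover on day {day}")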


Cisco CCNA Training, Cisco CCNA Certification

Best CCNA Training and CCNA Certification and more Cisco exams log in to Certkingdom.com

Posted in Tech | Tagged , , , | Leave a comment

HP talks cloud delivery options, the importance of OpenStack, how it competes on price

An in-depth conversation with Bill Hilf, Senior Vice President of Product and Service Management for HP Cloud, about where Helion fits in, cloud consumption models and coming change.

Bill Hilf, Senior Vice President of Product and Service Management for HP Cloud, brings an interesting perspective to his job given his former role as General Manager of Product Management for Windows Azure, Microsoft’s cloud platform. Network World Editor in Chief John Dix and Senior Editor Brandon Butler got Hilf on the line for his big picture view of the importance of OpenStack, why HP recently acquired Eucalyptus, the impetus to compete on price, and the various cloud delivery options customers are pursuing.

How do you position Helion and where does it fit into the market?

Helion is our brand name for our cloud product portfolio which allows customers to deploy in any cloud context, be it a private cloud or a public or a hosted cloud environment. The applications and data and virtual machines that are going to ride on top of that cloud infrastructure can behave consistently across those different environments.

Enterprises are really struggling trying to do the all-in-one cloud model. But they don’t only use a single operating system or database or management tool, so we believe they will need to create a hybrid cloud environment. It’s not so much because they want to, it’s because they need to given the reality of their existing IT environments.

And what is fundamentally different with our approach is we’re building a composable product portfolio so if a customer wants to have only, let’s say, an application platform or only an infrastructure as a service platform, or wants to bring existing hardware, be it HP or non-HP, into a cloud environment, we need to have something that is composable and flexible.

That led us to probably the most important design decision we made, which was to build this product portfolio with a deep spine of open-source technologies. So we have OpenStack at the core of our IaaS layer and Cloud Foundry at the core of our development platform, but it’s not limited to that. We also support a wide range of open source tools, different types of application technologies, different databases and multiple languages. Really our core DNA is building around open source, which means less vendor lock-in and more flexibility for enterprise customers.

We just started to ship the first production-ready GA version of the Helion OpenStack distribution and Helion development platform which we’ve been working on for the past year and a half, and there are a number of ways customers can pick it up. There is a community version users can download and play with for free, they can buy it as stand-alone software to run on their own gear, they can buy it pre-integrated with HP solutions, or they may consume everything as a service. The latter doesn’t have to be a public cloud. It might be a hosted environment inside an enterprise so the customer can consume everything internally to meet regulatory requirements or policies.

So that’s how it will manifest. Customers will have a choice of different cloud models.

So a customer could have you build a cloud within their organization and run it for them as a service?

Yes. So customers might say, “I want all the benefits of a cloud, the speed, the economics, the self-service, but I want it in my data center and I want you to fully manage it, either remotely or in my environment.” That’s particularly appealing to large enterprises and large government agencies. That model is coming up again and again, and there are lots of different terms for it. You can call it managed private clouds, or a cloud-enabled hosting environment, but it’s essentially what you said.

The capital expense is yours and the customer just pays a service fee?

There are all sorts of ways customers want the mathematics to work. Sometimes they’ll want to be an internal cloud broker, providing services to internal customers. We have a big media customer doing this. They have an internal portal that says, “Hey, do you want compute or storage or networking?” And the internal end user has no idea what is actually providing that. Behind the scenes, based on the requirements and the price point and the constraints the end user describes, they can deliver the services either from their Helion OpenStack private cloud or, in some cases, they go out to a public cloud.

So, for example, if a customer wants extreme commodity storage pricing and they have very few constraints on how that data is stored or where, this internal broker might go back with AWS, but it’s presented to the internal customer just as a storage resource. That’s a really common pattern right now. We call it ‘internal service providers’ but it’s kind of cloud brokering.

Can you describe the difference between Helion OpenStack and the Helion Development Platform?

Helion OpenStack is a distribution of OpenStack built around the current tree of Juno. We don’t go in and swap out core components for HP proprietary stuff. We take the core of OpenStack and then do a whole bunch of work to make it easier to install, patch and configure, because that’s where a lot of the pain points are right now in OpenStack. We also do a lot of security work on it and then run it at very large scale in the HP public cloud to test for reliability. We learn a lot from running OpenStack in a large public cloud environment.

Above that we have the Helion Development Platform, which is a PaaS layer, but think of it as using Cloud Foundry as the host, or the run time, for applications. So it supports all these different languages and you can publish your Java app or node.js app or Python app or Ruby app into that full application lifecycle environment.

Then alongside of that we have a set of application services. So, for example, if someone wants to use database-as-a-service, we have an easy-to-use DB service so a developer can quickly add a database to their app. Behind the scenes we do a binding between that database-as-a-service at the PaaS layer, all the way down into OpenStack’s database-as-a-service offering called Trove. That way we can then offer that database-as-a-service at the development platform layer in a way that’s automatically highly available, and automatically has disaster recovery built in because we’re leveraging the Trove system underneath and providing that resilience to the database behind the scenes.

We’ll do a lot more things like that where we basically illuminate the capabilities inside OpenStack at a higher level for developers to take advantage of. For example, there’s this concept called affinity scheduling inside OpenStack where you can say, assign my VM to a high memory machine or assign these VMs to that data center because that is the only one that’s HIPAA compliant. As that grows in OpenStack, we want to light up that type of capability higher in the platform so it becomes really easy for the developer.

Also, what we use behind the scenes in our Helion development platform is Docker. Every app you build on our Helion development platform instantiates as a Docker container so you can take those Docker containers and assign them wherever you want. We think this Docker + OpenStack combination is going to be very powerful.

So, back to your question, they are two different architectural layers. One is targeted at developers, and one is targeted at IT ops. They can be used independently but we’re doing a lot of work to make them better together.

When it comes to use cases for cloud, VMware is positioning its vCloud Air as a natural landing spot for ESX workloads, and Microsoft Azure is a natural spot for Hyper-V and System Center, so where do you see HP being the natural answer?

Because of my Microsoft background I can ask a company what versions of Windows Server and System Center they're using and I'll know right away if they're a Microsoft loyalist or not, and for those customers, the Azure story is compelling. And AWS is definitely the default if you're a startup and looking for the fastest onramp to getting some compute and storage resources that can scale wide. Where we win is with enterprises that have stepped all the way through the virtualization steps in the past three to four years, companies that have more than 50% of their environment virtualized. Now they're getting a lot of pressure on being able to go faster.

So what they’re trying to do is take a first step into the cloud, but they are typically encumbered by a tremendous amount of existing IT or security requirements or other business or industry constraints. We have a customer, for example, who just did a few acquisitions, some of which have used public clouds. Their business policy doesn’t allow the use of public clouds so now they have to repatriate those resources back inside their firewall. So we deal with a lot of people who are building private clouds first.

Private cloud on their premise?

Yes. The other big sweet spot for us is service providers and telcos. And there are a few reasons for that. One, telcos in particular are very open-source oriented. And two, many service providers and telcos are massively threatened by the public cloud vendors. So, if you are a telco or service provider in, let's say Europe or Asia, Amazon and Google can be really threatening, not just because of their cloud businesses, but because of the whole value chain, all the way down to the device. So they want to 'OEM' our public cloud technology because they need to build a competitive offering to an AWS or Google in their markets.

In the enterprise, how critical are network advances such as software defined networking and network function virtualization in supporting this whole hybrid vision?

Frankly, the network is either the enabler or the bottleneck in most cloud deployments because so much of a horizontally scalable distributed system is deeply tethered to network capabilities. So when you start moving to 100 to 1,000 to 10,000 to 100,000 nodes in a system, the network architecture becomes increasingly critical. In our distro of Helion OpenStack we make sure our networking functionality is great upstream in Neutron, which is the network component inside OpenStack, but we also need to be pluggable with other SDN controllers, with VMware NSX, with our own HP SDN, etc. And down the road we'll have to be pluggable with others that emerge because there won't be one SDN to rule them all, even though I'm sure some vendors would love to have that control point, but it's just not realistic.

This is one of the challenges of building commercial open-source products: you have to have as much value as possible without ripping out the flexibility that customers were originally interested in with open source, or without tainting that because it’s very easy to go too far one way or the other where it becomes a Swiss Army Knife. It’s good at a whole bunch of things but not really good at any one thing. Or it goes the other way and becomes extremely proprietary and you kind of lose the reason why you built on open source overall.

One way we’re addressing the specific networking needs for one of our customer segments, communication service providers, is through a partnership with Wind River to integrate their carrier grade technologies into Helion OpenStack. This will provide communications service providers with an open source based cloud platform to meet their demanding reliability requirements and accelerate their transition to NFV deployments. All within our open source model and keeping OpenStack API compatibility.

Are all Helion private clouds based on OpenStack or do you sell some non-OpenStack private clouds as well?

Historically we had a private cloud infrastructure-as-a-service offering called Matrix that was not open source. This was actually before I joined. There are still customers that use that, but over time our plan is to evolve that product with our Helion OpenStack distribution. We will do it in a thoughtful manner so we don’t force customers to rip and replace. But going forward we’ve made a company-wide commitment to OpenStack.

It’s a fundamental bet. We actually got asked once at a very senior meeting, “What’s Plan B if OpenStack doesn’t work out?” I said there is no Plan B. If you have a Plan B, having lived through this at Microsoft, you end up hedging, doing things to secure the option. So you have to go all in if you really want a platform to take off. So it’s a big, fundamental decision for us and a fundamental focus that we have to make OpenStack be what we need it to be for our enterprise customers. There’s not a lot of “let’s sit around and wait for it to evolve.”

There are certainly still some big challenges with OpenStack, but we have many customers who are happily running hundreds of nodes and many thousands of VMs in OpenStack for a private cloud and getting great benefit today.

In terms of hypervisor support, do you guys focus on one hypervisor or support a bunch?

At every layer we need to give customers choice. So we support KVM, which is the default people use in most cases, but with this release of our Helion OpenStack we support ESX and very shortly we’ll support Hyper-V.

But at each layer we support choice. At the hardware layer, for example, we support our HP gear but have a certification test for third parties on non-HP gear, and a set of tests and benchmarks we give to third-party OEMs to validate against. We know we’re not going to sell an HP server with every software sale – that’s not reality.

Then even further up the stack we have multiple programming languages and frameworks people can choose from, from Python or Ruby or Java or .NET. That polyglot environment is important for us.

So we’re not only giving customers a choice of where to install and run their cloud, we also give them a lot of choice when it comes to the technology they can use because, at the end of the day, the VMware story is very vertical, the Red Hat story is very vertical, the Microsoft story, even though they talk a lot about open source, is really very vertical. Choice and a platform truly built on open source – that’s a differentiation for us.

If you’re pushing a high-end, enterprise-level story, why on the Helion website are you shouting about price so much? That kind of screams commodity.

As of 2014 less than 10% of enterprise IT is using cloud computing, so the growth opportunity is huge. And when you are trying to fight an early market battle for share, particularly for OpenStack oriented customers, you want to grab as much share as fast as possible.

One of the biggest advantages of a company like HP is we have all sorts of ways we can monetize. We don’t need to sell software at huge margins. We don’t need to sell a server for everything we do. We don’t need to sell services for everything. We have all kinds of ways we can make money through the broad HP. So that gives us a bunch of freedom, actually more freedom than I had at Microsoft because we can do things on every dimension to compete and aggressively grab market share.

And one tool we can use is price. So we can go undercut the other guy because our P&L isn’t solely based on software markets. We certainly compete with other OpenStack distributions like Red Hat. So one of the reasons we’re coming in at the price point we are is because we want to make it zero friction for our customer when they do that comparison of OpenStack distro A versus OpenStack distro B, at every level of comparison.

But, that said, almost everything we do is through a larger enterprise relationship. Typically when an enterprise is buying from HP they’re not making a singular decision for one piece of software or one server order or one set of services. So we talk about the big picture, what our cloud platform can do, how we indemnify our distribution of OpenStack, product capabilities, pricing, the whole thing.

This is really hard when you have a business model that is pegged to one thing like software because you end up between a rock and a hard place because you can’t easily discount below your margin line because it’s very difficult to make that up. Microsoft has a little bit more flexibility because they have such a breadth of software and they have such a breadth of offerings. For Red Hat and VMware it’s a little different because they are bound to their business model, so they have some very hard floors and ceilings in terms of what flexibility they have.

You recently acquired Eucalyptus which doesn’t have big OpenStack roots. They’re mostly about AWS integration. How do you see that fitting in?

Eucalyptus was really two things for us. It was a good collection of people who know how to build cloud software, and it was the AWS interoperability piece. I keep talking about choices, and we realize the design pattern of AWS is hugely relevant. So we needed the ability to tell customers, if you have or are interested in that design pattern, we have a way to support that.

So where we typically see the Eucalyptus demand is where a customer wants to have the ability to move an app out of AWS back to a private or managed cloud environment, or where someone says, I don’t know what’s going to happen yet in terms of going to the public cloud so I’m going to first build my private cloud apps with Eucalyptus and the AWS design pattern (basically meaning using the EC2 APIs, the S3 APIs, etc.), and building it in a way that gives me the flexibility to locate the work where I want.

What should we look for this coming year?

You’ll see us continue to build out our Helion distro of OpenStack and our Helion development platform, so you’ll see new services, new capabilities, that kind of thing. You’ll see us do a lot in the telco/service provider/NFV space.

And later in the year you’ll hear us talk a lot about a new model for enterprises that want to consume managed cloud services but don’t want to buy anything physical, don’t want to own anything anymore, that just want to consume, but in a way that matches their business realities today. We’ll be doing a lot in that space. I’m a believer that the cloud industry we have today is going to look very different in the future as the enterprise really starts adopting cloud technologies – and then all cloud vendors will shape their strategies to fit what enterprises want. So we’re trying to skate to where the puck will be and start to invent some of those new models.

You mentioned that analysts say only 10% of enterprise needs are supported by the cloud today. What’s the timeframe for change?

That’s the multi-trillion dollar question, isn’t it? But I see two enterprise patterns happening right now and this may inform the answer. One is the linear step. I’m going to move from virtualization to private cloud infrastructure-as-a-service, then I’ll try out some of this PaaS stuff to see how that really makes sense. Then I’ll see if I can run that across multiple data centers and then maybe see if a public cloud thing makes sense. So it’s kind of a linear mode.

The other pattern I hear, and this is the riskier one, is where the CIO says any new app inside my enterprise will be built to platform-as-a-service and can have zero knowledge of an operating system underneath it. What they’re trying to do is say, let’s start building in the new cloud-native model so we don’t have to worry about migrations and lift and shift and all of that.

But then there’s another question, and that is, which platform-as-a-service? At some point you’re binding to something, you’re making some commitment to some API somewhere. It may not be at the operating system level anymore. It may be higher up the stack in the middleware.

Then frequently we see customers say, we won’t move our existing resources to a cloud model. We’ll build the next project or the next deployment in a true cloud model. We’ll build that as a stand-alone system and then try to bridge back, usually through management tools, to the old. That is very common as well.


 

Cisco CCNA Training, Cisco CCNA Certification

for more info on HP Training and HP Certification and more log in to Certkingdom.com

Posted in HP | Tagged , , , , , , | Leave a comment

How to set up 802.1X client settings in Windows

802.1X provides security for wired and Wi-Fi networks

Understanding all the 802.1X client settings in Windows can certainly help during deployment and support of an 802.1X network. This is especially true when manual configuration of the settings is required, such as in a domain environment or when fine-tuning wireless roaming for latency-sensitive clients and applications, like VoIP and video.

An understanding of the client settings can certainly be beneficial for simple environments as well, where no manual configuration is required before users can log in. You still may want to enable additive security measures and fine-tune other settings.

Though the exact network and 802.1X settings and interfaces vary across the different versions of Windows, most are quite similar between Windows Vista and Windows 8.1. In this article, we show and discuss those in Windows 7.

Protected EAP (PEAP) Properties

Let’s start with the basic settings for Protected EAP (PEAP), the most popular 802.1X authentication method.

On a Network Connection’s Properties dialog window you can access the basic PEAP settings by clicking the Settings button.

Next, you move through the settings on this PEAP Properties dialog window.

Validate server certificate: When enabled, Windows will try to ensure the authentication server that the client uses is legitimate before passing on its login credentials. This server certificate validation tries to prevent man-in-the-middle attacks, where someone sets up a fake network and authentication server so they can capture your login credentials.

By default, server certificate validation is turned on and we certainly recommend keeping it enabled, but temporarily disabling it can help troubleshoot client connectivity issues.

Connect to these servers: When server certificate validation is used, here you can optionally define the server name that should match the one identified on the server’s certificate. If matching, the authentication process proceeds, otherwise it doesn’t.

Typically, Windows will automatically populate this field based upon the server certificate used and trusted the first time a user connects.

Trusted Root Certification Authorities: This is the list of certification authority (CA) certificates installed on the machine. You select which CA the server’s certificate was issued by, and authentication proceeds if it matches.

Typically, Windows will also automatically choose the CA used by the server certificate the first time a user connects.

Do not prompt user to authorize new servers or trusted certification authorities: This optional feature will automatically deny authentication to servers that don’t match the defined server name and chosen CA certificate. When this is disabled, users would be asked if they’d like to trust the new server certificate instead, which they likely won’t understand.

We recommend this additive security measure as well. It can help prevent users from unknowingly connecting to a fake network and authentication server and falling victim to a man-in-the-middle attack. Unlike the two previous settings, you must manually enable this one.

The next setting is where you choose the tunneled authentication method used by PEAP. Since Secured password (EAP-MSCHAP v2) is the most popular, we’ll go through it. Clicking the Configure button shows one setting for EAP-MSCHAP v2: Automatically use my Windows logon name and password (and domain if any).

This is the dialog box you see after clicking the Configure button for the EAP-MSCHAP v2 authentication method.

This should only be enabled if your Windows login credentials match those in the authentication server, for instance if the server is connected to Active Directory. After connecting to an 802.1X network for the first time, Windows should automatically set this appropriately.

Back on the PEAP Properties dialog window, under the authentication method, are four more settings:

Enable Fast Reconnect: Fast Reconnect, also referred to as EAP Session Resumption, caches the TLS session from the initial connection and uses it to simplify and shorten the TLS handshake process for re-authentication attempts. Since it helps prevent clients roaming between access points from having to do full authentication, it reduces overhead on the network and improves roaming of sensitive applications.

Fast Reconnect is usually enabled by default when a client connects to an 802.1X network that supports it, but if you push network settings to clients you may want to ensure Fast Reconnect is enabled.

Enforce Network Access Protection: When enabled, this forces the client to comply with the Network Access Protection (NAP) policies of a NAP server setup on the network. For instance, NAP can restrict connections of clients that don’t have antivirus, a firewall, the latest updates, or other health related vulnerabilities.

Disconnect if server does not present cryptobinding TLV: When manually enabled, this requires the server use cryptobinding Type-Length-Value (TLV), otherwise the client won’t proceed with authentication. For RADIUS servers that support cryptobinding TLV, it increases the security of the TLS tunnel in PEAP by combining the inner method and the outer method authentications so that attackers cannot perform man-in-the-middle attacks.

Enable Identity Privacy: When using tunneled EAP authentication (like PEAP), the username (identity) of the client is sent twice to the authentication server. First, it's sent unencrypted, called the outer identity, and then inside an encrypted tunnel, called the inner identity. In most cases, you don't have to use the real username on the outer identity, which prevents any eavesdroppers from discovering it. However, depending upon your authentication server you may have to include the correct domain or realm.

This setting is disabled by default and we recommend manually enabling it. After enabling identity privacy, you can type whatever you want as the username, such as "anonymous". Alternatively, if the domain or realm is required: "anonymous@domain.com".
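If you want to audit how these PEAP settings ended up in saved wireless profiles, the profiles can be exported and inspected. The Python sketch below is only an illustration: it shells out to the real netsh wlan export profile command, but the export folder is hypothetical, and the XML element names it searches for are assumptions based on Microsoft's PEAP profile schema, so verify them against your own exported files before relying on it.

# Sketch (Windows): export saved Wi-Fi profiles and flag ones that still prompt
# users to accept unknown authentication servers. The folder path and the element
# names searched for are assumptions; check them against your exported XML.
import pathlib
import subprocess

export_dir = pathlib.Path(r"C:\Temp\wlan-profiles")   # hypothetical folder
export_dir.mkdir(parents=True, exist_ok=True)

# Export every saved WLAN profile as XML (no credentials are included).
subprocess.run(["netsh", "wlan", "export", "profile", f"folder={export_dir}"], check=True)

for profile in export_dir.glob("*.xml"):
    text = profile.read_text(encoding="utf-8", errors="ignore")
    if "ServerValidation" in text and "<DisableUserPromptForServerValidation>false" in text:
        print(f"{profile.name}: users can be prompted to trust new servers; review this profile")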
Advanced 802.1X Settings

On a Network Connection’s Properties dialog window you can access advanced settings by clicking the Advanced Settings button.

The first tab is the advanced 802.1X settings.
On the 802.1X Settings tab, you can specify the authentication mode: User, Computer, User or Computer, or Guest authentication.

User authentication will use only the credentials provided by the user, while Computer authentication uses only the computer’s credentials. Guest authentication allows connections to the network that are regulated by the restrictions and permissions set for the Guest user account.

Using the combined User or Computer authentication option allows the computer to log into the network before a user logs into Windows and then also enables the user to login with their own credentials afterward. This enables, for instance, the ability to use 802.1X within a domain environment, as the computer can connect to the network and domain controller before a user actually logs into Windows.

When User only authentication is used, you can click the Save Credentials button to input the username and password. Additionally, you can remove saved credentials by marking the Delete credentials for all users checkbox.

The second section of the 802.1X Settings tab is where you can enable and configure Single Sign On functionality. If the system and network are set up properly, using this feature eliminates the need to provide separate login credentials for Windows and 802.1X. Instead of having to input a username and password during the 802.1X authentication, it uses the Windows account credentials. Single sign-on (SSO) features save time for both users and administrators and help to create an overall more secure network.

Advanced 802.11 Settings

On the Advanced Settings dialog box you’ll see an 802.11 settings tab if WPA2 security is used. First are the Fast Roaming settings:

The second tab on the Advanced Settings window holds the advanced 802.11 settings.
Enable Pairwise Master Key (PMK) Caching: This allows clients to perform a partial authentication process when roaming back to the access point the client had originally performed the full authentication on. This is typically enabled by default in Windows, with a default expiration time of 720 minutes (12 hours).

This network uses pre-authentication: When both the client and the access points support pre-authentication, you can manually enable this setting so the client doesn't have to perform a full 802.1X authentication process when connecting or roaming to new access points on the network. This can help make the roaming process even more seamless, useful for sensitive clients and traffic, such as voice and video. Once a client authenticates via one access point, the authentication details are conveyed to the other access points. Basically it's like doing PMK caching with all access points on the network after connecting to just one.

Enable Federal Information Processing Standard (FIPS) compliance for this network: When manually enabled, the AES encryption will be performed in a FIPS 140-2 certified mode, which is a government computer security standard. It would make Windows 7 perform the AES encryption in software, rather than relying on the wireless network adapter.


Comptia A+ Training, Comptia A+ certification

Best comptia A+ Training, Comptia A+ Certification at Certkingdom.com

Posted in Microsoft | Tagged , , , | Leave a comment

Hey Samsung: Not everybody has to be a platform

It’s easy to see why everybody wants to be a platform these days. Just look at Apple: By owning both the hardware and the operating system, it gets total control over what developers build on its platform — and a sizable cut of the revenues besides. In return, developers get an unmatched distribution channel directly to customers’ devices. As Apple extends to new devices, those developers get to come along.

So it’s no wonder that Samsung, eternally defining itself by its struggles with Apple, wants to be a platform, too, especially in the face of shrinking profits. On paper, it seems so simple: Samsung has the hardware business. It’s making some wearables, investing in a connected home business with the SmartThings acquisition, and getting into virtual reality.

Open some APIs, give out some SDKs, talk about “open” and host a big-time developer conference in San Francisco (as in, the Samsung Developer Conference I write this from) to make sure everybody knows how committed you are.

But what Samsung is lacking, what major platform providers have in spades, is something harder to pin down, and much harder to imitate. Apple, Salesforce, even Microsoft lately, have demonstrated that most vague, but most important notion. They have vision — a clear and present mission that drives them forward, even when that path isn’t immediately obvious.

But Samsung? Samsung has really good phones and some solid tablets and a partnership with Oculus and SmartThings and now Project Beyond, a super nifty 360-degree streaming 3D high definition camera. But in the entire two-hour keynote session this morning, attendees were treated to a rapid-fire string of previously announced non-news like the Simband open health wearable platform (now open for developer sign-ups), a demo of what’s possible with SmartThings and a reaffirmation that the company will keep investing in Samsung Knox, its enterprise workspace feature.

Other than the virtual reality stuff, and the Project Beyond camera, which are actually, really, very cool, it’s mostly a lot of the same old. The only “new” thing coming to Samsung devices is Samsung Flow, a me-too take on Apple’s cross-device Continuity features. Other than that, the company was just trying to show developers that products exist and can be built upon without offering a tremendously compelling case for why. It’s not really leadership material.

When Apple is selling watches, Google is selling Nest thermostats, and Microsoft is revamping Windows for the multi-device future, Samsung’s follow-along mentality of “just add developers” just doesn’t seem like enough, no matter how many sensors it adds to Simband.

(The company’s technical keynote takes place Thursday; maybe there’ll be something more impressive that will change my mind. But I doubt it.)

The point here is that Samsung is a hardware company, in so many ways. It’s succeeded in the first place by making devices that people actually want to use. And part of how it got there was by being part of somebody else’s ecosystem. And yeah, it must chafe those at Samsung corporate command to have Google to thank for the success of the Galaxy S line of phones. But maybe, just maybe, throwing your support behind an operating system that nobody asked for, wants, needs or supports (Tizen) wasn’t the right answer, no matter how technologically proficient it is.

And in the same way people ask whether Microsoft’s hardware business is good for Microsoft’s vision as a service provider, they have to also ask whether this whole insistence on being a software provider is good for Samsung’s business. Nobody seems excessively jazzed about developing for the Samsung-backed Tizen ecosystem in a world where Android and iOS are already pretty well standardized.

“Ecosystem” is just a fancy word for building the stuff that users, not corporations, want. Rather than controlling everything, maybe a renewed focus on being the best part of the Android ecosystem — and on making what customers actually want — would do Samsung good.


MCTS Training, MCITP Training

Best comptia A+ Training, Comptia A+ Certification at Certkingdom.com

 

Posted in Tech | Tagged , | Leave a comment