Saturday, April 28, 2007

Introduction to the PC and Mobile gaming industry

This week I've decided to take a break from blogging about security, and have instead written about the PC gaming industry with a little info on mobile gaming.


The PC gaming industry is a segment of the overall video game industry, and usually refers only to sales of the software titles produced by game developers. Unlike the console market, which includes sales of the hardware systems themselves, the PC gaming industry does not, because virtually any modern computer can install the software needed to become a 'gaming' system, and the computer's multifunctional nature makes it hard to define what percentage of computers are purchased solely for gaming. Nevertheless, as I will explain later, hardware manufacturers such as Intel, Nvidia, and Kingston are major players in the gaming industry and have led many campaigns to stimulate growth. The PC gaming industry encompasses both Windows and Mac platforms, but since Windows is the dominant platform, Mac sales are small and games are usually ported from the Windows or console versions (see link here). The current state of the mobile gaming industry is similar to that of the PC gaming platform, just in miniature (predominantly single-player, with only software sales counted), so for brevity I will talk only about the PC industry, with occasional notes on how mobile gaming differs where appropriate.

Players in the PC gaming industry

Hardware – As mentioned above, since the hardware needed to play games is not specific to any manufacturer, every PC and component manufacturer [from Dell to Logitech (who makes PC game controllers)] has at least a minor stake in the PC gaming industry. Therefore, to keep things short, I'll focus mainly on the two components that are the primary beneficiaries of the PC gaming industry: CPUs and video cards.

1. CPUs – The central processing unit does as its name suggests, and there are two major suppliers of the chips: Intel and AMD. Intel is the giant of the CPU market, with revenue last year of $35 billion, down from $38 billion the previous year due to significant market share losses across all markets (Dell used to ship only Intel-based systems); more importantly, Intel lost share in the lucrative server market to AMD, which now holds approximately 21% (see link here). AMD is the underdog of the CPU market that dared to challenge Intel's near-monopoly over the industry; it posted sales of $5.6 billion last year, but in recent years cash and operating losses have marred its financial performance.

2. Video cards/GPUs – As the demands of gamers push for more realistic environments, it quickly becomes apparent that the CPU alone isn't enough, so manufacturers came up with a separate device with graphics logic programmed into it (to draw all the polygons, smooth textures, and imitate physics). The three major players in this arena are Intel (38% market share), Nvidia (29%), and ATI (24% – see link), which recently merged with AMD (see link here). At first you wouldn't think of Intel as a dominant player in the graphics industry, since its products are poor performers compared to Nvidia's and ATI's, but the fact remains that many computers were never meant for gaming and use Intel's onboard graphics for daily use. AMD's merger with ATI indicates the assault on Intel's core markets will continue.

Software developers – While there are many small design studios, according to Hoovers the three big developers in the entertainment and games segment of the software market are EA, Activision, and Take-Two (see industry link). These names probably sound familiar because they are the same developers behind the popular titles you see on the PlayStation, Xbox 360, and Wii. The sales figures below are for each company as a whole, not exclusively for their PC/mobile gaming divisions; since most of their games are ported over to PC/Mac/mobile, they are mentioned here.

1. Electronic Arts – Last year EA had sales of $2.9 billion, more than the combined sales of its top two competitors. Most of EA's top titles come from perennial sports favorites such as Madden 07, NBA 07, and NHL 07, but also include popular film tie-ins such as The Lord of the Rings: The Battle for Middle-earth, Harry Potter, and The Godfather. What all these popular titles have in common is that they require licensing fees to be paid to their respective owners (more on this under competitive dynamics). In a move to capture the evolving mobile gaming platform, EA acquired mobile game leader JAMDAT in 2006, formed ventures such as one with TOM Online to distribute mobile games in China, and entered a strategic alliance with Qualcomm's BREW mobile gaming platform (see link to EA).

2. Activision – Tracing its roots back to the Atari 2600 and the popular title Pitfall!, Activision is also a major licensee of game titles such as Star Wars, Shrek, and Marvel Comics' film franchises, including Spider-Man and Fantastic Four. Last year Activision posted sales of $1.4 billion (see link to Activision).

3. Take-Two – Unlike the two competitors above, which rely heavily on the popularity of licensed titles to support sales, Take-Two and its subsidiaries 2K and Rockstar are driven by in-house IP titles such as Grand Theft Auto, Civilization, and RollerCoaster Tycoon, though it does have a small collection of licensed games. Last year Take-Two posted sales of $1 billion (see link to Take-Two).

Licensors – These are the owners of IP content. In the gaming industry this content often follows whatever is in the popular media of the day, from the Harry Potter films (licensed from J.K. Rowling) to popular icons such as Tony Hawk. Since licensors are generally fragmented, I will not list any players in particular, but keep in mind that licensing costs vary greatly and are a big component of development costs.

Advertisers – This is just about every company with a product to sell. Companies are only now beginning to invest in advertising within video games because, according to Hoovers, three-quarters of heads of households play video games and more than 40% of gamers are in the 18-49 age bracket. These are key demographics who make purchasing decisions and are generally the most affluent.

Competitive Dynamics in PC gaming

According to this NY Times article, the PC gaming industry had US sales of $970 million in 2006, $953 million in 2005, and $1.1 billion in 2004. When you subtract out the revenue generated by online games such as Second Life, WoW, etc., what is left is a PC gaming industry in decline or making only marginal growth. This is in stark contrast to the overall gaming industry, which PwC (here) believes will grow at a compound annual rate of 16.5% at least until 2009. What could cause such poor performance in the PC gaming industry? Aside from substitutes such as consoles and the online gaming that is related to PC gaming, the three problems I'm going to talk about are piracy, internal competition, and the lack of a gaming standard.

1. Piracy – Better to be a pirate than to join the Navy?

Let's face it: any data that touches a computer is going to be pirated. Given that the PC is a multifunctional device that can be programmed to do whatever you want, it should come as no surprise that hackers have devised programs to bypass the copy protection on any game. This is a little harder to do with consoles, because at every generation hackers must reinvent the wheel, but over time every system has been or will be cracked to play pirated games. (Here is a site selling a chip that has cracked the Nintendo Wii already.) An estimate from this BBC article states that global piracy cost the gaming industry $4 billion in 2004, with lost sales of 50% in the US and over 90% in Eastern Europe, Asia, and South America! In the same article, Todd Hollenshead, CEO of Doom 3 developer id Software, claims that by using pirated games we are only poisoning the well from which we drink as the PC game industry falls further into decline, and that the industry is locked in a constant game of one-upmanship with hackers. Pushing the envelope in this battle, Sony was sued for its DRM rootkit, and the controversial copy-protection company StarForce offered a $10,000 prize to anyone who could prove that its software causes hardware malfunctions (although it is known to make pirated software crash).

2. Competition amongst Developers

Currently, among the three major developers mentioned above there is a mad rush to sign license agreements for what might become a future franchise hit. For example, Activision has obtained exclusive rights to the popular Marvel film titles and already plans to release the Spider-Man 3 game alongside the movie. While licensed games, such as EA's sports series, can be profitable if done correctly, I'm reminded of the many licensed failures such as Shaq Fu and E.T. the Extra-Terrestrial (which was a catalyst of the gaming industry's early collapse). Furthermore, EA CEO Lawrence Probst points out in an interview with the Wall Street Journal that increasing development and licensing costs are hurting margins. Nevertheless, EA and many other licensees are locked into these titles because they continually bring in a sizable revenue base (40% for EA). To counteract this effect, developers are looking to establish wholly-owned IP titles such as EA's upcoming title Spore (more on this later).

3. Lack of a gaming standard

One of the greatest strengths and weaknesses of the PC platform (as well as the mobile phone) is that improvements in hardware performance move along a continuous function, while console platforms move in step functions. What I mean is that if you were to graph performance vs. time, you would see the PC market curve exponentially upward over time while console systems take leaps forward with each generation of gaming system. Why does this happen? If you remember the discussion at the beginning of our course, Moore's law is at play here, and not just in CPUs. The growing density of transistors applies to all components of the PC, including RAM, video cards, motherboards, etc., so when the market is viewed as a whole, and one realizes that PCs are built from combinations of products from different manufacturers, the performance of the 'latest' PCs increases smoothly over time. If you want to see real data representing this trend, check out Tom's Hardware or Futuremark. In both of those links you can compare various hardware configurations across different benchmark tests to see this exponential trend.
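The continuous-versus-step contrast can be sketched with a toy model. The growth rate, generation length, and leap size below are made-up numbers chosen purely for illustration, not real benchmark data:

```python
def pc_performance(month, monthly_growth=0.05):
    """PC performance compounds a little every month (Moore's-law style)."""
    return (1 + monthly_growth) ** month

def console_performance(month, generation_length=60, leap=8.0):
    """Console performance is flat within a generation, then leaps forward."""
    generation = month // generation_length
    return leap ** generation

# The PC curve rises smoothly; the console line is a staircase.
for month in (0, 30, 60, 90, 120):
    print(month, round(pc_performance(month), 1), console_performance(month))
```

Plotting these two functions side by side gives exactly the picture described above: a smooth exponential curve for PCs against a staircase for consoles.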

So how does this affect the gaming industry? Well, let's say you bought the game in the picture below, Need for Speed: Carbon, and you want to turn on all the graphics options so you can see the cars shine in the sun, but after installing the game your computer crashes, or worse yet, you find out that your hardware is incompatible. Nothing is more annoying than not being able to play your new video game, and since piracy is a big issue, the retailer won't accept refunds on any software. So, in order to stay on the gaming curve you have to upgrade something in your system, only to find two years later that your hardware is not up to par again. Hardware makers realize that games and graphics sell more components, which is why AMD merged with ATI and Intel can now be seen sleeping with ATI's archrival Nvidia (as seen below):

Intel and Nvidia: unlikely bedfellows?

Rather than dealing with all this, some consumers are moving toward consoles, where the game is guaranteed to work and they are somewhat shielded from the rapid product lifecycles of the PC world.

Business Models & Revenue Models

The primary business model of the PC gaming industry is the development and delivery of actual games. As I mentioned earlier, one current trend affecting development is the licensing of movie titles and famous names, which has become increasingly expensive over the last few years. Meanwhile, the delivery of PC games has been marred by rampant piracy. Given these two views of the industry, it's no wonder the PC gaming industry has not done well since its high in 2004. Rather than boring you with how games are currently made and sold, I'm going to talk about new sources of revenue, how the PC gaming industry is recapturing revenue lost to piracy through a new form of distribution, and an under-explored opportunity in gaming.

New Source of Revenue – Advertising

As mentioned in the introduction, gaming is just beginning to see advertisements enter the picture. In-game advertising comes in three forms: billboards, featured products (both are common), and games that are themselves advertisements.

1. Billboards – Similar to real-world billboards, game developers set aside various locations throughout the game where advertisers can bid for (in a standalone game) or rent (in an online game) a given amount of space to place their logos, catchphrases, or commercials. In one of the games I'm currently playing, Need for Speed: Carbon, you can see the AutoZone advertisement in my screen capture.

You'd never buy McLaren parts at AutoZone, but then again McLaren owners don't need to play this game.

2. Featured products – By this I mean that real-world products can be modeled and used in the game. This turns the licensing market on its head: while developers have had to pay licensors for the rights to a name, other licensors will have to pay to get into the game. For example, in Need for Speed: Carbon, now that I've beaten the game, one of the features I find myself spending hours with is the customization tool. In the early days of computing such tools were reserved for real designers and engineers, but today anyone can make something worthy of a museum. In the game you get to own all kinds of cars you'd never dream of purchasing, and do things you'd never dream of trying (going off ramps, crashing through walls, and driving faster than a Bugatti Veyron). Since computers today are fast and have sophisticated tools to mimic realism, why not customize your ride into something you'd never do or could never afford in reality? With the customization editor I can put on identically rendered body kits, rims, hoods, etc. modeled on real manufacturers' parts. Although it can't be done in this game, designers could in the future add special in-game benefits when a featured product is used, bringing some rarity and prestige to luxury goods. (Envision this statement: "After three days of playing, I finally saved up enough money to wrap my Ferrari in carbon fiber, which gives me +5 happiness and +10 sex appeal!") Of course, if manufacturers want their products in top games, they'll have to pay up.

Don't like the stock rims on my McLaren? Perhaps the BBS GT's look nice...

3. The game is the advertisement – An example of this is a free PC game called America's Army. Given Dick Cheney's shooting record and low morale in and outside of the military, this game was created to stimulate interest in the virtues of the American military, in hopes that more people would join the armed forces. If the message is strong enough, or developers are able to integrate advertisers well into the game, why not have them subsidize the game or give it away for free, in hopes of getting a return on the investment in other forms?

New Distribution – Online downloads

With piracy running rampant in the PC world, it is very difficult for developers to determine who actually paid for their software. This becomes particularly important when you offer customer support or the ability to play on official online servers. The traditional boxed game with a CD key offers little assurance here because it is a static defense: within days of the release of any PC game, cracks that bypass the authentication process, and key generators, are available on the web. The developers at Valve have addressed these problems with an online distribution system called Steam. After installing Steam and logging in to the server, the user can pay for and download from a catalog of (currently) 156 popular and classic titles, thanks to strategic alliances with other top PC developers, including Activision. The user also benefits because the Steam application automatically keeps games up to date with the latest patches. Since users log in and Steam keeps games updated, this opens new revenue streams such as targeted in-game advertisements. It would also bring in revenue from smaller game studios that don't have the capital, haven't established the relationships needed to manufacture boxed games, or are creatively deterred by rampant piracy. Although Steam has many problems to work out (for example, on the first day of Half-Life 2's release it could not keep up with user demand), if all PC game developers move to this model, piracy may become a problem of the past in nations with internet connections.
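The static-versus-online distinction above can be sketched in a few lines. This is a toy contrast, not Valve's actual protocol or any real key format: the "digit sum" rule and the key strings are invented for illustration.

```python
def offline_cd_key_valid(key):
    """Static check: here, digits must sum to a multiple of 7.
    A 'keygen' simply generates strings that satisfy this fixed rule."""
    digits = [int(c) for c in key if c.isdigit()]
    return sum(digits) % 7 == 0

# Keys the publisher actually issued -- known only to its own server.
ISSUED_KEYS = {"1234-0004"}

def server_side_valid(key):
    """Online check: the server trusts only keys in its own records."""
    return key in ISSUED_KEYS

forged = "0000-0007"  # digit sum is 7, so it passes the offline rule
print(offline_cd_key_valid(forged))  # True: the static defense is fooled
print(server_side_valid(forged))     # False: the server rejects it
```

The point is that the offline rule ships inside the box and can be reverse-engineered once and for all, while the server-side list never leaves the publisher, which is why a login-based system like Steam is so much harder to defeat.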

Under-explored Opportunity - Education

Yeah, you've heard this before with children's games, but hear me out first. Today the strategy games on the PC are far better than on consoles. Why? The tools, in both hardware and software. If you've ever watched a good Warcraft III player or played Counter-Strike, you know that you need a massive number of programmed shortcut keys and both hands (one on the mouse and one on the keyboard) to keep up. This is my hypothesis for why the top 10 games in the PC world are mostly strategy based (Civilization IV, Age of Empires III, etc.), why the top 10 games in the console market (Madden 07, Fight Night Round 3, etc.) are sports based, and why top hits in the console market don't make it to the top of the PC market and vice versa.

The primary reason why PC games and education go hand in hand is established infrastructure: every school has at least one computer. In my school district (and I'm sure many others) there is an initiative to get a computer in every classroom. (Try convincing your principal to do that with consoles...) Internet access doesn't exist in a lot of classrooms because of infrastructure costs (so online games are out of contention), but at least a computer is there. The problem is, outside of academic software, not many good PC education titles exist to fully utilize that computer or hold students' attention.

So with users able to dynamically process and interact with the PC, why not bring that into education? The real world does not operate in a vacuum, but we are taught to learn in fixed domains, so why not give kids a chance to integrate them all? Take, for example, the game Civilization. This series is a top PC hit and takes the player from 4000 B.C. to modern times. Throughout the game the player must balance economics with politics, culture, technology, the military, and the environment. Furthermore, it's turn-based and fixed on a map, so it's like playing chess on a different level. If this and other strategy games had a little more of an educational twist, they'd make great learning and money-making tools.

Future Trends: Kung Fu in the PC gaming industry


After investing time and money in beating a game, gamers are happy to know that they are the best player in the house... but what about being the best in the world? As you probably know, the future of PC gaming is being wrapped more and more into online play and the network opportunities there. Furthermore, with digital convergence taking place, it doesn't matter what console or system you are playing on; it's all packets going to a server anyway. Microsoft realized this and last month decided to connect its Xbox Live subscriber base to a new Vista platform called Games for Windows – Live (link). Pure money-making genius, Mr. Gates! Instead of going through the growing pains of building a new network standard, leverage an existing one to make both platforms stronger. What a novel idea... why not do this across all gaming platforms, whether PS3, Xbox 360, Wii, or PC? Since my colleagues are covering online games, I'm not going to say anything more about them. Instead I'm going to focus on how this interaction with the gaming industry is changing the PC world.

Massively single player

With all things moving online, does that mean the PC industry is doomed? Not quite. As I mentioned before, one of its major problems is the lack of a gaming standard due to differing hardware configurations. Nevertheless, the PC will always outpace consoles in performance over time, which makes it an exceptionally good standalone device that lets users think strategically in a dynamically processed world. Take, for example, the upcoming game Spore from EA. Having missed the online frenzy with The Sims 2, revolutionary game designer Will Wright plans for the game to use something called "procedural animation," which models behaviors and physics, then lets the engine determine how objects should move and act in response to stimuli. This is unlike traditional game AI, which operates in linear and often "stupid" ways (think of some shooter games today: the enemy comes at you only in a frontal assault and often follows the same path). Wright couldn't have put it better: "Instead of making fixed definitive things that we put out into the world, I think we've both decided that it's much more interesting to make things that even we can't predict." The idea is appropriate given that the game is about evolution, from the primordial ooze to the colonization of new planets. Furthermore, while the game does have an online component where people can share beings, you won't need a broadband connection, or an internet connection at all, because an entire Spore universe can be condensed into a file of about 80 kilobytes and therefore transferred on (if you still have one...) a floppy! (This reminds me of the ending of Men in Black and how infinitesimal we all are.) By letting the procedural engine take over and users create and introduce new beings, the game offers unique replayability every time.
Since it is easier to get a computer into remote and developing parts of the world than an internet connection, Spore and other PC titles still have relevance and growth potential in those markets. Spore also represents the trend toward in-house developed titles rather than reliance on expensive licensing.
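The reason a whole procedural world fits in a tiny file is that you only need to store the inputs, not the outputs. The sketch below is a hypothetical illustration of that principle, not Spore's actual engine or file format: a creature is stored as nothing but a seed, and the same complex creature is regenerated deterministically from it on any machine.

```python
import random

def generate_creature(seed):
    """Deterministically 'grow' a creature from a tiny seed value."""
    rng = random.Random(seed)  # same seed -> same sequence of choices
    return {
        "legs": rng.randint(0, 8),
        "eyes": rng.randint(1, 6),
        "color": rng.choice(["red", "green", "blue", "violet"]),
        "aggression": round(rng.random(), 3),
    }

# The shared 'file' is just the seed -- a handful of bytes...
creature_file = 42
# ...yet any two players regenerate an identical creature from it.
assert generate_creature(creature_file) == generate_creature(creature_file)
```

Scale the same trick up from four attributes to an entire universe of planets and creatures and you get the 80-kilobyte file the article describes: the procedural engine is the decompressor.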

Haptics

The history of gaming thus far has focused on visual sensation: being able to draw things that look like and imitate reality or fantasy. But what about the other senses? Enter the Novint Falcon, which aims to imitate movement and the sense of touch. Recent developments in computing power have made the Falcon technically feasible, and with PCs at the front of the performance curve, don't expect to see this technology on your console any time soon. In gaming, this device can be used to feel the weight of shooting a basketball, the recoil of firing a gun, or the force of swinging a bat. Aside from pure gaming, the device can be used for educational purposes, such as in surgery. In the operating room, knowing where to make the cuts is useless if you don't know how much force to apply. I can easily see this device training new surgeons, but also teaching wannabes what goes on after the nitrous oxide. (Furthermore, why not have a surgeon on the other side of the world perform surgery remotely via a robotic arm? This could reduce the need for a specialized surgeon in every hospital and save many lives in "risky" and poor countries.) Another application of this device is in art, specifically painting. If you've ever tried drawing with a mouse, you know how hard it is to get the right shape and color intensity. Haptic devices can imitate the feel of a brush tip on the surface of a canvas while offering laser-fine precision.

In Sum...

Through my analysis, I hope I have shown you that the PC gaming industry is reacting like kung fu: what was once perceived as a weakness has been turned into a strength. While PCs lack the standardization that consoles offer, they are still excellent platforms for cutting-edge performance (massively single-player games) and the latest technological advancements (haptics). Meanwhile, the industry is making strides to combat piracy through online distribution and finding new revenue opportunities in that business model. Lastly, the domains of gaming and education are currently under-explored, and as convergence continues, the two can intertwine to make an awesome gaming experience and a better world.

Saturday, April 21, 2007

Digital Convergence: Putting all your eggs in one basket?

While my daughter was collecting eggs this Easter, and our class was just beginning to talk about digital convergence, it struck me to investigate how one aspect of this global phenomenon is affecting the security world. Traditionally the enterprise network looks like the topology in the picture above. A Gartner article on the network firewall market points out that "Moore's Law is finally beginning to apply to... network security," which has caused prices to drop (though not commoditize) and has allowed firms to target the emerging SMB (small to medium-sized business) segment with low-cost all-in-one solutions. Here I will investigate the targets for network security convergence and the implications for business.

The eggs
There are many target devices for convergence in network security infrastructure, so for brevity I will describe five of the most common:

Router – These devices direct traffic being sent through the network, and when a packet needs to leave the network they can calculate the shortest or most economical path to get it there.

Firewall – Like the walls of a building, these devices restrict access, acting as an impervious wall through which only authorized packets may pass. The trick is how one defines what is authorized and how deep into each packet one is willing to inspect (a trade-off with speed and cost).

IDS/IPS (Intrusion Detection/Prevention Systems) – These devices have threshold 'sensors' and behavioral logic defining what is considered an intrusion. For example, multiple failed login attempts may indicate a brute-force password cracker trying to gain access, so a red flag should be thrown.

VPN (virtual private network) – These devices perform the encryption and private 'tunneling' protocols that enable secure connections between networks.

Gateway – These devices limit and translate data sent across the network so that the message arrives in an understandable format. For example, suppose you use a proprietary email system called Mail-e and you want to send a message to someone who uses Microsoft Exchange. Sent directly, the messages are unreadable, so the Mail-e gateway converts them to a readable format and also ensures that only email of a certain type may enter and pass through its 'gates'.
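The two jobs a gateway performs (translation and gatekeeping) can be sketched together. Everything here is hypothetical, including the "Mail-e" field names, since Mail-e itself is an invented example:

```python
# The gateway's policy: only plain-text messages may pass the 'gate'.
ALLOWED_TYPES = {"text"}

def gateway(mail_e_message):
    """Translate a Mail-e-style message into the receiver's format,
    blocking anything the policy does not allow through."""
    if mail_e_message.get("kind") not in ALLOWED_TYPES:
        return None  # blocked at the gate
    # Translate field names into the layout the receiving system expects.
    return {
        "To": mail_e_message["rcpt"],
        "From": mail_e_message["orig"],
        "Body": mail_e_message["payload"],
    }

msg = {"kind": "text", "rcpt": "bob", "orig": "alice", "payload": "hi"}
print(gateway(msg))                     # translated and passed through
print(gateway({"kind": "executable"}))  # None: blocked by policy
```

A real email gateway does far more (MIME handling, virus scanning, address rewriting), but the shape is the same: inspect, decide, translate.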

The basket
In the Gartner report mentioned earlier, the analysts identified a few 'leaders' in the market (Check Point, Cisco, and Juniper), so I compared their SMB product features. What I found was every permutation of the above five 'eggs' across different product lines, but I want to focus on one from Juniper Networks: the Secure Services Gateway (SSG) 140, which can be configured to combine all five 'eggs' (link to product).

Why this is a bad idea for SMB
The statement that "Moore's Law is finally applying to network security" means that since processors are getting faster thanks to greater transistor density, you no longer need separate machines to split up the processing load and can combine these tasks in a unified device. Although lower cost and more convenient, this convergence brings to mind three problems stemming from the loss of modularity:

  1. Single point of failure (SPF) – As I mentioned to Peony in one of her recent blogs, the more we converge appliances, the greater the cost of downtime and the greater the impact of any single failure. Combining all these functions into one device and relying on it alone is contrary to the goal of security, which is to mitigate risk. In practice, organizations overcome SPF through redundancy in parallel or in series, but even such configurations remain susceptible to the following two problems.
  2. Reduced ability to combine with physical and administrative defenses – Assuming this monolithic device can meet all your technical security needs, what's to stop a malicious insider from simply unplugging or stealing it? In high-security environments, these devices are usually physically separated, locked up in different nearby rooms, with each room requiring some form of authentication, whether a simple sign-in at the front desk or the approval of two supervisors via biometric scans.
  3. Independence of audit trails – After good hackers gain access to your systems, they typically cover their tracks by clearing the logs. In the traditional model, where the devices are separate, each network appliance keeps its own independent logs and each layer of defense is progressively harder to break through. On a unified appliance, a hacker can clear all the logs in one fell swoop.
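The SPF argument can be put in rough numbers. Assume, purely for illustration, that each box fails independently with a 1% chance in a given month; the interesting quantity is the probability that every security function is down at once:

```python
# Back-of-the-envelope model of the single-point-of-failure argument.
# The 1% monthly failure rate is an assumed, illustrative figure.
p = 0.01

# Five separate appliances: a total security blackout requires all five
# boxes to fail in the same month (assuming independent failures).
p_total_blackout_separate = p ** 5

# One unified appliance: a single failure takes every function down.
p_total_blackout_unified = p

print(p_total_blackout_separate)  # on the order of 1e-10
print(p_total_blackout_unified)   # 0.01
```

Individual functions fail about as often either way, but the unified box turns every failure into a total blackout, which is exactly the "all your eggs in one basket" risk the heading alludes to.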

Implications for Business
As usual, I've got to give my disclaimer: I'm not saying these devices shouldn't be purchased, but one should take into account their vulnerabilities as well as their strengths before doing so. These devices were built to target SMBs, which are generally low risk and low budget and so don't need a multi-tier network. Nevertheless, just because you are small doesn't mean you are low risk (e.g., local patient clinics), and security begins with a critical assessment of your information.


Despite all this, businesses face constant internal pressure to cut costs, and on the surface these devices seem ideal. Since security is a cost center in most organizations, that pressure is compounded by those who don't understand security's true value or the risks their actions entail. For this reason it is important not to fall into a similar trap, and to appropriately identify the needs of your organization now and in the near future.

The Gartner article that I mention above doesn’t have a direct link since it’s a subscription service, so here is the citation:

Young, Greg. “Magic Quadrant for Enterprise Firewalls, 1H06.” Gartner Group. ID Number: G00141050. 5 June 2006. Accessed 15 April 2007.

Wednesday, April 4, 2007

404 Error: Service Denied


Since this week in class we are talking about software as a service and web services, I think it's appropriate to talk about the security implications and how they will affect business. To recap, the logic is that businesses spend a lot of money on redundant services that are common to everyone, so rather than paying large sums to develop and maintain systems, why not outsource them to a web provider with a competency in the service you need, while gaining the flexibility and omnipresence of the web? The most important threat to web services is an attacker's ability to halt the very services they hope to provide. What I'm talking about is referred to as DoS in the security world (or DDoS if it's distributed).

What is DoS?

No, I'm not talking about the software that turned a computer-nerd Harvard dropout into a billionaire... it stands for Denial-of-Service. There are many forms of Denial-of-Service attack, so for brevity I'll describe only one, so you can understand how it works. Much of the web is about conventions and handshakes. For example, when you want to connect to Google, your computer sends out a packet of data that in simplified terms says, "Hey Google, I want to connect to you!" Google's servers receive the message and send a packet back saying, "OK, I see that you want to connect to me, so let me allocate the resources so you can download my webpage." Your computer responds, "OK, since I know you are there, I'll allocate the resources to receive." The connection is then established and data can flow. But what if, after the second step, we never respond? What does Google do? Having already allocated the resources to communicate, Google waits and periodically says, "I'm ready to send, are you there... are you there?" (For an excellent but fully technical explanation see here.) The resources Google allocates for one connection are minuscule, but string along thousands or millions of computers that all yell at once and Google will be brought to its knees. This is what is known as SYN flooding. According to this article, the nation with the most zombie servers (23%) in the world is China, but guess which nation controls most (40%) of the world's zombies? You guessed it: the USA.
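The handshake exhaustion described above can be simulated in miniature. This is a toy model with made-up numbers, not a real TCP stack (real servers have large tables, timeouts, and defenses such as SYN cookies):

```python
class Server:
    """Tracks half-open connections the way the handshake story describes:
    a SYN reserves a slot, and only the final ACK frees it."""

    def __init__(self, capacity=5):
        self.capacity = capacity
        self.half_open = set()  # connections still awaiting their ACK

    def receive_syn(self, client):
        if len(self.half_open) >= self.capacity:
            return "dropped"          # table full: new clients are denied
        self.half_open.add(client)    # resources allocated, awaiting ACK
        return "syn-ack sent"

    def receive_ack(self, client):
        self.half_open.discard(client)  # handshake complete, slot freed
        return "connected"

server = Server(capacity=5)
# An attacker sends SYNs from five spoofed addresses and never ACKs back...
for addr in range(5):
    server.receive_syn(f"spoofed-{addr}")
# ...so a legitimate user is now denied service.
print(server.receive_syn("legit-user"))  # prints "dropped"
```

Honest clients complete the handshake and free their slot almost immediately; the attack works precisely because the spoofed clients never do.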

Don’t piss off a good hacker … or run a super sale?

So how does Google defend itself? Well, as my friend Peter points out here, one part of the equation is that big companies such as banks and Google are adopting the latest multi-core processors and grid computing to expand their resource capacity. But it’s just a numbers game… if enough hackers get together, they can still bring any website down. Playing devil’s advocate, let’s say Google builds up massive defenses and tells hackers, “anything you can throw at me I can handle.” Google and many other websites are still vulnerable, because hackers also go after the DNS servers. Domain name servers translate the URLs you type into the numeric web addresses that link surfers to company sites (you can be proud to know that USC’s ISI is a player in the IANA). One notable DNS attack in 2004 brought down Apple, Google, Microsoft, Yahoo, and many more all at once! (See article here.)
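Here’s a tiny sketch of why DNS is such a tempting target. The hostnames and addresses below are hypothetical, and a real resolver involves a whole hierarchy of servers, but the dependency is the same: if the name lookup fails, the web servers can be perfectly healthy and still unreachable by name.

```python
# Toy illustration of DNS as a single point of failure.
# Records and addresses are made up for this example.

dns_records = {
    "www.example-shop.com": "93.184.216.34",
    "www.example-bank.com": "203.0.113.7",
}

def resolve(hostname, records):
    """Translate a human-readable name into a numeric address."""
    if records is None:
        # The DNS server itself is down or unreachable.
        raise RuntimeError("DNS unreachable: site appears 'down' to users")
    return records.get(hostname)    # None if there is no record

# Normal operation: the name maps to an address and the browser connects.
print(resolve("www.example-shop.com", dns_records))

# If attackers knock out the DNS server, nobody can find the site,
# even though the site's own servers may be fine:
try:
    resolve("www.example-shop.com", None)
except RuntimeError as err:
    print(err)
```

This is why an attack on a shared DNS provider can take down many unrelated sites simultaneously, as in the 2004 incident above.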

Hackers get all the bad rap for DoS, but you can bring a website down too. On Black Friday last year, Amazon offered the Xbox 360 for half the retail price. Users from around the world flocked to Amazon in hopes of getting in on this super deal, but ended up with white screens as the intense (and perfectly legitimate) traffic brought the website down for 10 minutes (see article here).

Implications for Business

The scenario involving Amazon was not the first time the website was brought down by excessive traffic, and it won’t be the last. Even if we discount the hackers, web services are still prone to DoS in the same way. For example, let’s say it’s Dec. 31 (the end of the fiscal year) and the web server handling your outsourced accounting functions is being bombarded by requests from other companies that share the same fiscal year-end. Frustrated because a report needs to be presented to management tomorrow, you (along with all the other companies who use this service) sit refreshing your page, hoping to get through. Unless the conventions that govern the web are redesigned, the web and all web services will always be susceptible to DoS. This leads me to two reasons why all of your company’s IT will not be moving to the web:

1. Control – When you keep your IT in-house, you dictate what happens on those systems and can shield yourself from the web by having no physical connection to it.

2. Security – With laws such as HIPAA and the steep penalties for leaked data, sensitive information such as patient records won’t be moving to the web. Laws aside, what if your web services host is compromised through social engineering, or a curious host employee wants to know whether to liquidate his or her stock based on your financials? Are you going to let your company and your information be subject to risks outside your control?

Don’t get me wrong, web services are great for businesses of all sizes, but the annoying guys in IT will always be around (sad, isn’t it... though there may be fewer of them, they’ll be paid a lot more). Although there is solid economic logic to support web services, there are many more factors and reasons why a company may not want to move its systems to the web, and I’m tired of writing, so let’s discuss them!

Monday, April 2, 2007

You Are the Weakest Link

This week my focus turns to you, the individual users of computer systems, and how you are the weakest link in the security chain. The two major areas I’m going to examine are social engineering and passwords (if you are familiar with the two then skip down), then I’m going to take aspects from the two and raise a hypothetical scenario that you may relate to.

Social Engineering – This is where people are manipulated in order to gain access. Oftentimes this is the easiest way in, as no technical knowledge is needed. Two examples: 1) an email solicitation from a Nigerian royal who offers you a share of millions in exchange for your help transferring the funds (the classic advance-fee scam, a cousin of phishing), and 2) a disk containing a Trojan, labeled “Company X Salaries,” left outside an office, which a curious employee opens only to unleash hell on the network (a road apple). There are many types of social engineering, and I’m sure you’ve heard many stories, so I won’t dwell on them here, but keep in mind that social engineering feeds off of ignorance (lack of knowledge) and/or careless behavior.

Passwords – Let’s face it… people forget passwords and want something easy to remember, so they choose a word or name, or reuse passwords across multiple sites (is this you? Keep that in mind for later). Brute-forcing tools such as John the Ripper or Hydra are very effective at obtaining passwords since they are attacking a static defense. Given a large dictionary file and the cryptographic hash (in Windows, it’s stored in the SAM file under System32\config\), it is only a matter of time before John cracks your password. Obviously, if your password is in the dictionary it’ll take John only seconds, but John can also be configured to try every combination of characters, including special characters, upper- and lowercase letters, and numbers! This is why sites like myMarshall require you to have a complex password and change it every 6 months.
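The core of a dictionary attack is surprisingly simple. Below is a minimal sketch of the idea John the Ripper automates: the wordlist, the stolen hash, and the password are all invented for this example, and I’m using SHA-256 rather than the LM/NTLM formats Windows actually stores, just to keep it self-contained.

```python
# Minimal dictionary-attack sketch (toy wordlist; real tools use huge
# dictionaries plus mangling rules and far faster hashing code).
import hashlib

def hash_password(password):
    """Hash a candidate the same way the stolen hash was produced."""
    return hashlib.sha256(password.encode()).hexdigest()

# Suppose an attacker has stolen this hash; the original password
# (unknown to the attacker) was the common word 'dragon'.
stolen_hash = hash_password("dragon")

wordlist = ["password", "letmein", "123456", "dragon", "qwerty"]

def dictionary_attack(target_hash, words):
    """Hash each candidate and compare; nothing is ever 'decrypted'."""
    for word in words:
        if hash_password(word) == target_hash:
            return word
    return None

print(dictionary_attack(stolen_hash, wordlist))   # -> dragon
```

Notice the attacker never reverses the hash; they just guess forward, which is why a password that appears in any wordlist falls in seconds, while one with mixed case, digits, and symbols forces the tool into a vastly larger search space.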

The following example was inspired by our recent discussion of Ethernet in class and my observation of a student checking Facebook.

While discussing how Ethernet works in class, one security implication was not elaborated on: after data is broken into packets, all the packets are broadcast to all users on the same subnetwork (read about it here). When a computer receives packets that are not intended for it, the packets are simply ignored. But what if we could capture those packets and read them? Enter the world of network traffic analyzers, such as Wireshark, which does just that. So how does Facebook fit into all this? Well, when you log in to Facebook, your login and password are broadcast to the network because it is not a secure site (no encryption; and even if you are on wireless, the WEP and WPA standards can be broken). So what does a smart hacker do with your password? Find out who you are, which other services you use (credit, banking, etc.), and cash in.

Implications for Business
The previous example made a lot of assumptions that may never hold, so am I saying never to connect to Facebook on campus? No, I’m leaving that judgment up to you, but imagine all the other hot spots and websites you’ve connected to that are not secured (with SSL or the equivalent), and all the personal information that is entrusted to the network. It’s also important to keep in mind that people are an essential pillar of technology, and user awareness training can help prevent many problems, including the scenario above, from playing out. Secondly, there are a lot of free tools that anyone can use, and it only takes a few actions to compromise an entire organization, so care should be taken in selecting your people. Lastly, it is important to familiarize yourself with how technology works, because too often we use technology without thinking and are surprised when bad things happen. This goes back to the unknowing trust and risk (which I mentioned in my first post) that are present in technology and that people expose themselves to without realizing it.

Thursday, March 15, 2007

Patches in Business

This week I’m looking at patch management, one of the most important cornerstones of computer security. My interest was sparked by an article, based on a study, which found that computers hooked up to the internet have a hacker attack run against them about every 39 seconds (see here). While the bulk of those attacks were password-related (I may blog about this later), some were vulnerability attacks, which are the lower-hanging fruit when it comes to small businesses, and that’s where the relevance to you comes in.

The need for patch management arises from the fact that there is no such thing as ‘perfectly written code’, which means software ships with bugs (imperfections/problems) or, worse, vulnerabilities, whether known or not yet known to the writers. With many programs containing thousands and even millions of lines of code, the complexity is often hard to grasp. The problem is compounded by the fact that some code, such as Microsoft’s, is proprietary and closed source (which also contributes to buyer lock-in). Of course, vulnerabilities vary in their impact, and depending on who discovers a vulnerability there are generally two significant paths:

1. The owners of the code realize the vulnerability, notify the public and assign it a risk level (subjective), then patch the code accordingly

2. Someone else discovers the vulnerability and either they report it to the owners for correction or use it as ammunition for exploitation

From 0-day to infinity

The term 0-day refers to the number of days between the public advisory and the release of an exploit; a true 0-day exploit appears the same day as (or even before) the advisory, leaving defenders zero days to patch. In the case of Microsoft, on the second Tuesday of each month (referred to as Patch Tuesday) the folks in Redmond release a set of patches to fix the problems in its software (whether discovered in-house or reported by another party). Through reverse engineering, code is then written to exploit the ‘critical’ vulnerabilities (some of which can grant root or administrator privileges) and is released the next day for sale (hence ‘Exploit Wednesday’). Apparently there is a sizeable black market for these 0-day exploits (some commanding prices upwards of $5,000 a pop; see the pic in this blog here), as hackers and legit security experts alike try to gain an advantage over companies scrambling to push the patch onto all of their systems.

Eventually, as more systems are patched and you get further from 0-day, the exploit code or a variant is incorporated into exploit suites such as Metasploit (www.metasploit.com), where beginners like you and me can use them by typing simple prompts, without understanding the underlying code, and best of all: it’s free and open source. A more advanced but costly tool is Core Impact ($25k/yr per license!). Like all things, there are some people who are late movers or who never apply patches, leaving their systems exposed. Some argue that Microsoft purposely releases vulnerable software and uses the update system to deter piracy, since only those with authentic Windows keys can obtain updates. With many people relying on pirated Windows OSs, it’s easy for a ‘wannabe hacker’ to do a lot of damage for many years to come.

Why businesses are slow or don’t patch ‘critical’ vulnerabilities

With the threat of 0-day exploits, it is imperative for businesses to apply patches as soon as possible, because doing so closes their window of exposure. But as I mentioned before, each vulnerability varies in its impact on the organization, and as a result some businesses choose not to patch right away. Why not? First off, some patches are non-critical, meaning they have no security impact; for example, updating Windows Media Player to add a new feature is considered an optional update, and for brevity I won’t consider these here. Three major reasons why businesses don’t patch critical vulnerabilities are: 1) the business doesn’t deem the threat a significant risk, 2) there’s a lack of funds or resources to deploy the updates, or 3) there are other impeding organizational considerations such as lost revenue, politics, and bureaucracy.

Let’s say, for example, there is a vulnerability whereby, by running a script, a third-party user can obtain administrator access to a web server through FTP (File Transfer Protocol). Once discovered, Microsoft would immediately flag this as a critical update because of the potential impact. But what if a business doesn’t have a web server and all its systems are configured to block FTP packets? Here, the critical update is not necessary to the business, and managers may never patch the vulnerability, or may leave it as a last priority. On the other hand, let’s assume a company has hundreds of computers worldwide and the cost to manually patch all the systems is $1 million (an IT administrator’s time doesn’t come cheap). Even if the company can afford the cost, manually updating hundreds of systems will still take weeks, so some companies have purchased automated patch management software (an in-demand IT solution often bundled with security consulting services; e.g., patchlink.com). But even with these solutions, businesses are slow to apply patches, because doing so translates into downtime and potentially lost revenue or productivity.

Implications for Business

When it comes to vulnerabilities in software with a wide user base, such as Microsoft Windows products, all users are affected, but businesses are particularly targeted because there is a greater potential reward. As with all business, security is about risk, and one of the first steps is to know the value of what you are securing (the NSA has a methodology called IAM; see here for more info). Obviously, if something is extremely valuable, like the systems that store patient records at a hospital, those are the systems you want to spend more of your money securing. There is a trade-off between cost and confidence in security, and because security is intangible, many managers are initially reluctant to dedicate the necessary money or resources. As a next-generation manager, it is important for you to know what information is critical to the business, what threats are out there, and what solutions you can apply to mitigate the risk. Even with a limited budget, patch management is an essential pillar of an information security plan, since tools like Metasploit can be easily obtained and used, leaving your business at the mercy of a hacker.

And if you get anything from this post, by now you should know to keep your software up to date, so do it now! :)

Thursday, March 8, 2007

Hello World!

Hello Everyone and Welcome to my Blog!

Here I plan to write about the need for information security, no matter the size of the organization. But before I go any further, you’re probably wondering who I am and why you should read anything written by me. Well, currently I’m a senior at the Marshall School of Business with a concentration in information systems. During my summer internship last year, I worked as a technology risk consultant and got to see the abilities of a business intelligence tool that analyzed and categorized threats as information passed through the network. I’ve also done cross-disciplinary work at the Viterbi School of Engineering, where I took courses on web security and forensics, and I am currently working with the Director of ITP on how to plan and implement an information security plan from the ground up, across an enterprise.

My interest in information security stems from childhood. Until recently, the focus of my academic pursuits was on the construction of information systems: whether building a computer from components or using Oracle to design a relational database. This focus was great for a beginner, and I took the perspective, like much of the world at the time, that as long as it worked we could put it in the closet and lock it up. Now that I am proficient at building, my focus has shifted toward breaking systems down and filling in the pieces that threaten to debase the edifice. (For example: how to gain unauthorized access to a system, or how to integrate key policies and procedures into a security plan to give managers the authority to monitor and fire employees.)

I also have to admit that it’s pretty cool being able to see what people do and realizing the level of trust and risk they unknowingly expose themselves to. Responsibly, of course…

This unknowing trust and risk leads to debacles in organizations all around us. For example, an unnamed school on the other side of town recently had its systems, which contained the personal information (including SSNs) of almost a million students, alumni, and faculty, compromised for nearly a year (link to Washington Post). Even the school I’m attending (USC) had its admission application database hacked using a common technique called SQL injection. Allegedly, the hacker acted as a white hat and notified USC officials, but the school still filed a complaint with the FBI and spent $140,000 notifying applicants of the breach (link to Daily Trojan). Furthermore, something as simple as a stolen laptop threatened millions of veterans’ and active military service members’ identities last May. Yet this is only the tip of the iceberg, as thousands of other stories don’t get media attention, and even more organizations are ticking time bombs that fail to see the value of security (particularly small- to medium-sized businesses).

As we become a society increasingly reliant on computers, the problems mentioned above and the need for security will only become more pronounced. In the future I plan to blog about ‘hacking attacks’ such as denial of service and SQL injection, privacy and protection measures such as business policy contracts and passwords, and perhaps the myth of the ‘Secure Mac’. Check back here often, as I continue to examine the world of information security: the business it creates, the damage that can be done, and how it all relates to you.