Monday, January 27, 2014

The best thing about BSIMM - it isn't a standard

The Building Security in Maturity Model (BSIMM) is not a standard - and that is the best part about it. Rather, BSIMM is a reflection of the secure software development practices actually deployed at 67 of the largest software development shops in the world, spanning software companies, financial institutions, healthcare companies, telecoms, and others. This is what makes BSIMM great and differentiates it from standards. The BSIMM practices have been forged in the fires of real threats, vulnerabilities, and business profitability, rather than handed down by the abstract committees from which too many security standards emanate.

In my admittedly limited experience using BSIMM across several organizations, it has been well received. Because it reflects the security practices of software development industry leaders, BSIMM changes the tone of the assessment from an audit (somewhat adversarial) to a benchmarking study that benefits the organization being reviewed. Companies are curious to know how they stack up against others.

BSIMM also fits well with a variety of development methodologies and contexts, whether waterfall, iterative, or agile. It is adaptable, too, being divided into three maturity tiers. For some organizations it may make sense to adopt just the tier 1 practices, while for others it makes sense to implement all of tiers 1 and 2 plus select tier 3 practices.

All of us in the security industry owe a big 'thank you' to the BSIMM authors - Sammy Migues, Gary McGraw, and the others. I'm looking forward to the development of observation-based security practices for other domains.

Sunday, January 26, 2014

Russia's computer crime laws are a big problem for the rest of the world

In Russia, financial computer crime is the perfect criminal enterprise. While the Russian Criminal Code makes it illegal to hack into computer systems, the punishment is ridiculously weak. If prosecuted, the maximum prison sentence is six months, provided the perpetrator returns the stolen money to the victim. By comparison, if your crime is categorized as 'hooliganism', you can receive a sentence of up to two years. So, steal $5 million and get six months in prison. Hack into a Twitter account and get two years.

Under this legal code one could build a successful and sustainable criminal enterprise. Let's conservatively estimate a 25% chance of being caught perpetrating a financial hack (it is probably much lower). The criminal pulls off his first hack without getting caught and banks $100,000. He pulls off a second successful hack and banks another $100,000. On the third, he is caught stealing another $100,000. He repays the $100,000 with proceeds from his first two crimes and goes to jail for six months. Out of prison, he pulls off two more successful hacks and again gets caught on the third. At a minimum, he is up $200,000. With the capital he has built up, he invests in people and infrastructure to expand the enterprise, banking proceeds and paying restitution and serving time as necessary.
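The back-of-the-envelope math above can be captured in a quick expected-value sketch (a toy model built on the post's illustrative figures; the function name is mine):

```python
# Toy expected-value model of the scenario above. Under the described law,
# a caught hack costs full restitution, so it nets $0 (ignoring the six
# months of prison time). The 25% catch rate is the post's assumption.

TAKE = 100_000    # proceeds per successful hack, in dollars
P_CAUGHT = 0.25   # assumed probability of being caught on any given hack

def expected_profit(num_hacks: int) -> float:
    """Expected monetary profit across a series of independent hacks."""
    return num_hacks * TAKE * (1 - P_CAUGHT)

print(expected_profit(6))  # 450000.0 over two of the post's three-hack cycles
```

Even with a catch rate this generous to law enforcement, the expected profit per hack stays positive, which is the post's point: the penalty structure makes the enterprise sustainable.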

Viktor Pleshchuk, one of the perpetrators of the RBS WorldPay hack, was found responsible for $318,000 of the $10 million stolen from RBS. He reimbursed RBS for the $318,000 and served a six-month sentence.

Time for an update of the Russian Criminal Code.

Friday, January 24, 2014

RBS WorldPay Compromise - One of the more sophisticated hacks of our time

The RBS WorldPay compromise is a great example for applying the Criminal Cost-Benefit Model. If you aren't familiar with this attack, it is worth studying. It shows just how far a criminal with computer hacking skills is willing to go to steal a few million bucks.

On November 8, 2008, an army of cashers armed with compromised pre-paid payroll cards descended on ATMs in over 280 cities around the world and withdrew $9.5 million in cash in a twelve-hour period. The cashers kept their commission, 30-50% of the take, and wired the remainder to the scheme masterminds. The four leaders of the heist had previously broken into the Royal Bank of Scotland WorldPay network, stolen data for 44 pre-paid payroll cards, cracked the payroll card PIN encryption, raised the funds available on each account to as high as $500,000, and raised the daily ATM withdrawal limits. The funds and withdrawal limits were changed just before the cashers were to begin their global withdrawals. During the heist the hackers monitored the withdrawal transactions remotely from the RBS WorldPay systems and, once the heist was finished, attempted to cover their tracks on the RBS network.[1]

This was a well-thought-out attack – perhaps one of the most sophisticated financial system hacks to date. I think these guys were well aware of the risks as they planned out this attack.
·      Monetary Benefit (Mb) – Very High. Assuming the cashers kept 50% of the $9.5 million, the four leaders each stood to make nearly $1.2 million.
·      Psychological Benefit (Pb) – Low.
·      Cost of Crime Perpetration (Ocp) – Moderate. Their primary cost in perpetrating the attack was the opportunity cost of their time spent in planning and execution. 
·      Cost of Legal Defense and Incarceration – Moderate. Speaking on the indictment of the criminals, even the attorney responsible for prosecution was impressed they were able to solve the case. “The charges brought against this highly sophisticated international hacking ring were possible only because of unprecedented international cooperation with our law enforcement partners.”[2]
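The monetary-benefit figure above can be sketched from the numbers in the post; note that an even four-way split among the leaders is my assumption, since the post only gives the cashers' 30-50% commission range:

```python
# Rough payout arithmetic for the RBS WorldPay heist as described above.
# Integer math keeps the dollar figures exact.

TOTAL_TAKE = 9_500_000   # total cash withdrawn in the twelve-hour heist
NUM_LEADERS = 4          # the four scheme leaders

def per_leader_share(casher_commission_pct: int) -> int:
    """Each leader's share after the cashers take their commission.

    Assumes the remainder is split evenly among the four leaders.
    """
    remainder = TOTAL_TAKE * (100 - casher_commission_pct) // 100
    return remainder // NUM_LEADERS

print(per_leader_share(50))  # 1187500 - commission at the high end
print(per_leader_share(30))  # 1662500 - commission at the low end
```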


Monday, January 20, 2014

Employees - Why So Many Accidental Data Breaches?

According to DataLossDB, which has been tracking data loss events since 2001, 20% of all data loss incidents are due to employee negligence and 8% to employee theft of data.
Why so many employee data loss incidents? To evaluate accidental data loss, let's look at the problem from the perspective of the data. In this case, consider data that is stored in a data warehouse.

The data warehouse security is locked down and access is restricted to a small set of data analysts and administrators. Of course, the data exists to be used to support business decisions. To that end, a business analyst who fancies herself good at twisting data around in Microsoft Access gets the database administrator to dump a set of data for her for use in a financial product analysis. She does her primary work on her workstation, but being a high performer she stores the data on an external drive so she can work on it at home off hours. She discovers some very interesting patterns that support creation of a new financial product. She emails her analysis and the supporting data to a half-dozen people. Three of the recipients save the reports to their unencrypted laptops. One person fetches the document from home using his home computer through the company Outlook Web Access mail system. And another just forwards the email to her personal email account to read from home.

From its secure origins in the warehouse, the data quickly spread to numerous insecure and unauthorized systems.

Here are a few real-world examples that show our hypothetical scenario isn’t so hypothetical:
·      April 3, 2009 – An Oklahoma Department of Human Services laptop containing the personal data of one million people was stolen from an agency employee’s car.[1]
·      June 18, 2007 – A Texas A&M professor lost a USB flash drive containing personal identification information of 8,000 students while on vacation in Madagascar. The professor claimed he took the data so he could do work while on vacation.[2]
·      September 2008 – An ADP employee accidentally sent a spreadsheet containing the personal identification information of a client’s employees to the wrong client.[3]
·      October 2006 – The Republican National Committee inadvertently emailed names and social security numbers of top donors to a reporter.

Here is a link to the DataLossDB statistics. Good stuff there.


What is Threat Analysis?

Threat analysis is the process of determining the likelihood of harmful things occurring to your assets – who will do what to which systems. This information, coupled with the value of each of your systems, forms the basis for making sound security decisions.

A threat is an indication of an impending event that is harmful.[1] Something that is impending and harmful to one entity may not be to another. A 6.0 magnitude earthquake is harmful. Whether an earthquake is impending or not depends on location. According to the U.S. Geological Survey there is a 90% probability of a 6.0 or greater magnitude earthquake occurring in the San Francisco Bay region before 2037. There is a 0% probability of a similar magnitude earthquake occurring in Bismarck, North Dakota, during the same period. Earthquakes are a threat to those who live in San Francisco. Earthquakes are not a threat to those who live in Bismarck. Interestingly, with all the recent hydraulic fracturing in North Dakota, the USGS may have to reassess its Bismarck earthquake assessment.

Just as the threat of earthquake varies by geographic location, information security threats vary by entity and by asset. Consider the simple example of the threat of customer account takeover through stolen authentication credentials for a bank and for a local auto repair shop. The threat is real and pressing for the bank if it has an online banking system, but it doesn’t even apply to the auto repair shop. Even for two financial institutions, the threat significance can differ based on factors such as the type of data and the transaction capabilities of their online banking systems, the size and profile of their customer bases, and even the geography they serve. Among banks, the large institutions often see threat activity years before the small ones do. Threat applicability and significance differ based on the organization and the asset in question.
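As a rough illustration of coupling threat likelihood with asset value, here is a minimal expected-loss sketch; all the dollar figures, likelihoods, and asset names are illustrative assumptions, not data from this post:

```python
# Minimal expected-loss view of threat analysis: the same threat matters
# very differently depending on the asset it applies to. All values below
# are made-up illustrations.

assets = {
    # asset name: (estimated value in dollars, assumed annual likelihood)
    "bank_online_banking": (5_000_000, 0.10),   # real, pressing threat
    "auto_shop_website":   (20_000,    0.001),  # threat barely applies
}

def expected_loss(value: float, likelihood: float) -> float:
    """Expected annual loss = asset value x likelihood of the threat."""
    return value * likelihood

for name, (value, likelihood) in assets.items():
    print(f"{name}: ${round(expected_loss(value, likelihood)):,}")
```

The point of the sketch is the ordering, not the numbers: identical threats yield wildly different expected losses once asset value and likelihood are factored in, which is why threat analysis must be done per entity and per asset.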


Friday, January 17, 2014

Cybercrimes of Passion

Not all cybercrimes fit into the Criminal Behavior Cost-Benefit Model. Some just don't make sense.

On June 13, 2008, Terry Childs, a network administrator for the City of San Francisco, was arrested for refusing to provide administrative passwords for the City’s FiberWAN network infrastructure after being disciplined at work. For eight days San Francisco had no system-level access to the infrastructure responsible for carrying 60% of its network traffic. Access was restored only after Terry told the Mayor of San Francisco the passwords to the systems during a private meeting in the prison where he was incarcerated.[1]

The cost-benefit formula assumes a rational thinker. Not the case here.
  • Monetary Benefit (Mb) – Nil.
  • Psychological Benefit (Pb) – High (short term). Once the court records are made public, I suspect we’ll learn that Terry, a CCIE, had a long-time poor relationship with the management staff and that he didn’t feel that anyone but he should have admin access to the network.
  • Cost of Crime Perpetration (Ocp) – Low. He already had admin access to the network infrastructure.
  • Cost of Legal Defense and Incarceration – Very high. Prosecution and incarceration were imminent – all facts attributed the crime directly to Terry.

The ‘irrationals’ represent a very small portion of the system hacks, but they are out there and they are very bothersome. Perhaps the people that scare us the most are the ones that we can’t explain.