Statement by Marc Rogers on the CTI League

Yesterday I provided testimony to Congress about the CTI League and addressed the allegations that it is somehow part of a government censorship apparatus. Part of that testimony is in our statement linked below.

Response to the censorship allegations about the CTI League (CTIL) made by Mr Taibbi & Mr Shellenberger.

Ohad, Nate, Chris, and I created the CTI League on the 14th of March 2020 because we knew that we were facing an unprecedented global health crisis. It’s no secret to anyone who works in cybersecurity that global crises lead to global surges in cybersecurity incidents. The threat actors behind these incidents know that they can exploit the fear, uncertainty, and doubt that surge during these times. They weaponize the opportunities a crisis provides and they attack those who are most vulnerable.

We founded the CTI League in order to use our skills to protect the infrastructure and first responders we knew to be fighting to keep us and our families safe. We founded the CTI League because we believe it is every citizen’s duty to do whatever is reasonably possible to support their fellow citizens.

Yet some actions needed to protect against cybersecurity threats are best taken by lawfully authorized government agencies or industry groups with existing connections to victim organizations. Success against sophisticated threats depends on collaborations like these – for example, you can imagine the response we got when we, as unknown individuals, called hospitals being targeted by foreign nation states. Our partnerships with agencies such as CISA and with the global law enforcement community were no different from those with the World Health Organisation, the Health-ISAC, countless national Computer Emergency Response Teams (CERTs), and many medical NGOs. At no point did this collaboration involve the CTI League being told what to do; they were guests in our collective. The scope of their involvement was limited to working on medical sector threats and nothing else.

To help settle this matter, the CTI League is releasing all of its files. While they aren’t very interesting, they show that we are not a shadowy instrument of censorship, that disinformation was only a tiny fraction of the work we did, and most importantly, that we have not and will never endorse censorship. Rather, we are a collective of well-meaning cybersecurity professionals who worked to fight technical issues during a period of great turmoil.

CTI League Github containing our documents, including all those leaked by the alleged whistleblowers. Please note that we have redacted names: individuals named in the irresponsible reporting of these false allegations have started receiving death threats, so to reduce harm we took this step. Unredacted versions have been offered to responsible journalists for comparison.

COVID-19-CTI-LEAGUE 

Through the collaboration within the CTI League over the last several years, we saw organizations warned of imminent ransomware attacks, we disrupted hostile nation attacks against critical medical infrastructure, and we worked together to protect ordinary people. People who in many cases were already experiencing some of the most challenging days they have faced. I am proud that we were able to serve our countries this way.

During the House Judiciary hearing on the Weaponization of the Federal Government, the idea that billion-dollar hospitals would need help from a bunch of volunteers was ridiculed. I wish that were the case. Sadly, we live in an age where the threat actors have the upper hand. A security team has to get everything right, all the time. A threat actor just has to find one opportunity once. It’s an asymmetric battle and the defenders are on the losing side.

Hospitals are amongst our most critical infrastructure, protecting the most vulnerable citizens in our society. As evidenced by the huge number of publicly known ransomware incidents affecting healthcare over the past few years (over 1,200 globally, nearly 900 in the US), hospitals, regardless of size, are exposed, vulnerable, and targeted by bad actors.

Healthcare saves lives. Therefore, lives are lost when that care is delayed, degraded, and denied by cyber attacks. 

There is no “Cyber 9-1-1”. During an unprecedented strain on the public health system, the CTI League played a key role in preserving care delivery – and therefore life. 

The CTI League has protected, and will continue to protect, speech protected by the First Amendment, and has opposed, and will continue to oppose, censorship. The documentation we published today shows this, as is known to those who worked these cases, who gave up nights and weekends to protect others, and even to those who have recently fixated on snippets taken out of context to further their own agendas.

The conversation about boundaries is an incredibly important one. US Government agencies and law enforcement organizations already have clear guidelines framed by the constitution and established law. The world is evolving rapidly, and if new lines are to be drawn it must be after careful discussion of the goals and the risks. As such, it is good that this conversation is happening publicly. However, I am dismayed to see the value and benefit of these discussions destroyed by inflammatory comments and reactionary proposals. It worries me greatly that these comments and proposals are being driven by speculative, poorly researched, intentionally divisive stories meant to further political and financial agendas.

We absolutely should be looking closely at where the lines should be drawn with regards to any kind of government involvement in our day to day lives. We absolutely should make sure that our rights are not infringed, whether deliberately or through well-meaning ignorance. However, we need to be very careful that while we perform this necessary introspection, we don’t damage the very fabric of what protects us.

I want the government to tell me and my service provider if a hostile nation is attacking my communications accounts and services. I want the government to take steps to prevent malicious entities from abusing our businesses. I worry that if we are not careful, we risk leaving ourselves vulnerable and exposed at a time when we know that hostile actors, both criminal and from foreign nations, seek to cause great harm. It’s clear that we need to carefully draw the lines that everyone should safely operate within. However, it’s just as clear that we can’t do this by burning everything down and starting again.

If we take our eye off the threat landscape for even a moment, the most vulnerable in our society will suffer the most. Those organizations with massive vulnerable networks, those organizations that fall below the cybersecurity poverty line, and those organizations that represent our most critical services are under constant attack and are already struggling.

As surprising as it may seem to some, altruists, volunteers, and ordinary citizens make a difference every day in cybersecurity. I look forward to continuing my work in the CTI League and several other similar organizations with my peers to relentlessly pursue the work we do to preserve care delivery for patients and their families.

 


Why Contact Tracing technology keeps me awake at night.


As a hacker for most of my life, I take a keen interest in technology, especially technology that is likely to impact my freedoms and the freedoms of people all over the world. Contact Tracing is incredibly important, but there are critical conversations we are missing that need to happen sooner rather than later. Without these conversations, we risk setting back advances in civil liberties by decades. Even worse, we could empower the enemies of free societies, such as dictators and authoritarian regimes, in ways that could result in mass genocide.

Contact Tracing is a key element of most countries’ plans to re-open their economies. As they get control of the spread of the disease, being able to identify new clusters and swiftly contain them is critical to their success. Without a widely available vaccine, ignore new infections, whether from visitors or previously undetected carriers, and you end up right back at square one. Traditional methods of contact tracing haven’t evolved much in hundreds of years. You ask the patient where they have been and attempt to use detective skills to find every possible contact. It’s very labour intensive and not very accurate. Miss one handshake, or a quick stop to grab coffee, and entire clusters slip through the net. This is aggravated by the fact that you are likely interviewing someone who is feverish and sick, and who may be struggling to breathe.

It’s no surprise that in this world of big data, governments are increasingly turning to technology to fill in those gaps and come up with a more reliable method of tracing infections. Unfortunately, this means developing or leveraging existing surveillance tools. Want to know where someone has been? Surveillance was developed specifically for this purpose. Not all solutions have been created equal. Some countries are attempting to achieve this by working with existing surveillance infrastructure, from CCTV to mobile phone monitoring and even human intelligence (HUMINT). Almost all of these solutions have significant shortcomings, from an inability to see where there are no cameras, to uneven technology distribution and areas of poor or no signal coverage. As for HUMINT, people are notoriously unreliable as witnesses, and as a solution it is relegated to always being reactive, never real time. However, it’s not these flaws that keep me awake. I’ll get into that in more detail later.

Other countries have spotted the flaws discussed above and realized that the best solution is one carried by the target and which is relatively independent of infrastructure to track interactions. This has led them to mobile phones. Nearly everyone has one, they are almost always switched on, and they support a wide range of protocols and wireless signals. In many ways the mobile phone is a gift for surveillance. This is why law enforcement has increasingly relied on devices like “Stingrays”, also known as IMSI catchers, which track targets based on the IMSI, a type of mobile identity number transmitted by all phones. IMSIs are unique, actively transmitted, and relatively easy to listen for passively. However, as a technology it too has flaws. First, you have to know your target’s IMSI, and second, you have to correlate it with other IMSIs in the area. Even then the correlation does not prove interaction, just that two devices were in the same place at the same time.
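
To see why co-location is such weak evidence, here is a toy Python sketch of the correlation step: given a log of IMSI sightings, it counts how often other IMSIs appear in the same cell during the same time slot as a target. All of the data and identifiers here are made up purely for illustration; real systems work on far messier inputs, and the result still tells you nothing about whether two people actually interacted.

```python
from collections import defaultdict

# Toy sightings: (imsi, 10-minute time slot, cell id), roughly what a catcher might log.
sightings = [
    ("IMSI-TARGET", 100, "cell-A"), ("IMSI-1234", 100, "cell-A"),
    ("IMSI-TARGET", 101, "cell-B"), ("IMSI-1234", 101, "cell-B"),
    ("IMSI-5678", 101, "cell-B"),   ("IMSI-5678", 102, "cell-C"),
]

def co_located(target: str, sightings) -> dict:
    """Count how often each other IMSI was seen in the same cell and time slot as the target."""
    slots = {(t, c) for imsi, t, c in sightings if imsi == target}
    counts = defaultdict(int)
    for imsi, t, c in sightings:
        if imsi != target and (t, c) in slots:
            counts[imsi] += 1
    return dict(counts)

print(co_located("IMSI-TARGET", sightings))   # {'IMSI-1234': 2, 'IMSI-5678': 1}
```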

To improve on IMSI catching, groups working on Contact Tracing realized they needed a way to:

  • Confirm interaction to a greater degree, including how close the targets were.
  • Identify how long that interaction was for.
  • Optionally confirm where this took place.

This has resulted in several reference designs, the two most prominent of which are the open-source BlueTrace technology that has been used to create apps like TraceTogether in Singapore and COVIDSafe in Australia, and the Exposure Notification framework being developed by Google and Apple in partnership. Both frameworks have clearly been designed with privacy in mind. They offer cryptographically secured data, rotating keys, and rotating IDs to limit third-party tracking of users. Both are also currently heavily consent driven, requiring a user to consent to upload their encounter logs for analysis. The Google/Apple Exposure Notification framework offers some significant advantages over the BlueTrace design, such as being a truly decentralized solution with key management taking place on the mobile devices and no central server storing keys and PII (mobile phone numbers). However, these aren’t really keeping me awake either.
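
To make the rotating-ID idea concrete, here is a minimal Python sketch of the kind of rolling identifier derivation described in the published Exposure Notification cryptography documentation: a daily Temporary Exposure Key is reduced to a Rolling Proximity Identifier Key, which is then used to encrypt the current 10-minute interval number into the 16-byte identifier a phone broadcasts. The constants and padding below follow my reading of the public spec and should be treated as illustrative rather than a reference implementation; it assumes the third-party `cryptography` package.

```python
import os
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def rolling_proximity_id(tek: bytes, unix_time: int) -> bytes:
    """Derive the 16-byte identifier broadcast during one 10-minute interval."""
    interval = int(unix_time // 600)                         # the "ENIntervalNumber"
    rpik = HKDF(algorithm=hashes.SHA256(), length=16,
                salt=None, info=b"EN-RPIK").derive(tek)      # per-day identifier key
    padded = b"EN-RPI" + bytes(6) + interval.to_bytes(4, "little")  # exactly one AES block
    enc = Cipher(algorithms.AES(rpik), modes.ECB()).encryptor()
    return enc.update(padded) + enc.finalize()

tek = os.urandom(16)          # daily Temporary Exposure Key, kept on the device
print(rolling_proximity_id(tek, int(time.time())).hex())
```

The point of the construction is that an eavesdropper who only sees the broadcasts sees a value that changes every interval and cannot be linked back to the key, while a user who later chooses to publish their daily keys allows other phones to regenerate those identifiers and check them against their own encounter logs.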

What keeps me awake is two things.

First is the fact that in many countries we have suspended or bypassed traditional privacy legislation and oversight. Previously, to track someone to this level you would have needed some form of official warrant, likely backed up by substantial probable cause. While you could argue that the potential spread of a deadly infectious disease could amount to probable cause, it is the lack of process inherent in these systems that worries me. The challenge for policy makers is that these systems have to have as little friction, and thus process, as possible in order for them to work.

What we should be doing is talking about how these loopholes will be closed or managed once the pandemic has run its course. These technologies exist now and the precedent for using them also exists. Unless we build a cage around that use, we are setting ourselves up for a very slippery slope in years to come. Just what is the threshold that makes such close surveillance acceptable further down the line? Do we use it for the next pandemic? How about measles? The flu? Terrorist groups? What is the new process for taking this action and where is the oversight?

The second thing that keeps me awake is that we are building surveillance technology. As discussed above, that’s because we have to, if we want this to be effective. Yet this has the potential of being one of the most powerful pieces of surveillance technology created in decades, if not centuries. The actual implementations we know about have been designed well and are some of the most privacy-centric mobile designs I have seen in a long time. However, it’s not the design that gives me the most concern, it’s the use. The point is, they are surveillance tools designed to track who comes into contact with whom, how long they were in contact, and who they came into contact with next. They do this really, really well. This technology is like an IMSI catcher/Stingray on steroids. You can identify a single person and watch almost virally as they connect with people, to an almost infinite number of degrees of separation.

We have seen the power of Bluetooth-based tracking beacons in commercial use, from tracking shoppers to in-building navigation and marketing. The detail you can get from this kind of tracking would shock most people. There are commercial Bluetooth tracking companies that drive around slurping up Bluetooth signals and can tell you where the owners of specific devices live, work, and shop. This new contact tracing technology represents the same risks to privacy and more – from just one person, you could identify every member of a terrorist cell, you could make it incredibly difficult for criminals to hide their associations, or you could track future diseases. These are all potentially good things. However, on the flip side, the same technology can be used to track innocent people just to identify who they associate with, or to track dissidents and journalists as they interact with their sources.
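
How easy is passive collection? The snippet below is a rough sketch, using the third-party `bleak` package (my choice for illustration, not something mentioned above), of how little it takes to hoover up nearby Bluetooth LE advertisements from an ordinary laptop; commercial trackers do the same thing at scale, with better antennas and databases behind them.

```python
import asyncio
from bleak import BleakScanner  # assumes the third-party "bleak" BLE library is installed

async def main() -> None:
    # A single ten-second passive scan; every advertising device nearby shows up.
    devices = await BleakScanner.discover(timeout=10.0)
    for device in devices:
        print(device.address, device.name or "<no name>")

asyncio.run(main())
```

Exposure Notification beacons rotate both their identifiers and their Bluetooth addresses precisely to frustrate this kind of collection; many ordinary consumer devices are far less careful.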

So what do we need to do? Let me be absolutely clear, as someone who studied biological sciences for a big chunk of his life, and as a technologist and a father – I believe that robust contact tracing is an important tool for us to develop and use in combating infectious diseases. Rapid, accurate identification of infected people would have limited the extent and duration of the shutdowns we are now experiencing. With other more visibly symptomatic diseases, it could enable a quick, surgically efficient quarantine to be established and maintained. However, we need to talk about how it’s going to be used going forward. We need to discuss the processes involved in activating these technologies, especially when they become part of the very operating system of our phones. We need to discuss who gets to hold the keys and what oversight there will be for activation and use. We also need to talk about the risk of abuse and how we in more free societies will protect the less advantaged from that kind of abuse. Most of all, this needs to happen transparently and in a way that all concerned stakeholders, including the public, can be part of the process.

There are ways that this technology can be managed responsibly. For example Apple and Google can reserve the right to deactivate this technology if it gets abused in any region. Likewise Apple and Google are US companies, so US lawmakers can play a significant role in developing appropriate oversight for this tool. However nothing will happen unless we all force this conversation and ensure that it happens soon.

As a kid I grew up with comic books, and am quite fond of the phrase “With great power comes great responsibility.” As a hacker I grew to believe that knowledge and information is power. What we have here is something that intersects both these principles.

 

How we Hackers and Makers can help save lives during the pandemic.

Across my timelines I see many of my friends looking for ways to help others. You fill me with joy.


First and foremost a few thoughts on mental health.

1. The internet is our home. We, above all in society, are best equipped to handle physical isolation. We have huge support networks and deeply close connections.

2. Most in society are not like us. We can help them. I see non-internet friends panicking about what they can do in isolation, and their fear is visible. My internet friends are posting music lists, books, games and organizing virtual get-togethers.

3. We should use this skill to help those who need help. Support them and show them there is a way to have a human connection. Keep an eye on your friends and neighbors. Community comes in many forms and we humans need community like we need food or water.

4. Make groups and organize events. Invite non-internet friends and keep inviting them even if they don’t come. It will take many some time to adjust. However, sometimes just reaching out can make all the difference.

5. Use everything we have learnt from our bitter experiences with mental health in the community to watch everyone’s backs.

A few thoughts on making as a way to help.

1. Medical devices often have subtle complexities that can be harmful or fatal if you get them wrong. Don’t make things like respirators unless you can secure medical guidance to get it right. Instead, as mentioned below, let the medical community tell you what they need and get the specifications, materials and tolerances from them.

2. A better approach would be to organize a “supply chain”. Organize a group of makers that can build a wide variety of parts, establish a comms channel and a loose organizational structure, then be ready to make whatever is needed. Maybe it’s valves, maybe it’s hose parts; being dynamic and able to adapt to medically led need is likely to be far more useful than printing thousands of something that may only fit a specific case or brand of equipment (which might not even be the same here).

3. Our strengths are numbers, agility and creativity. Leveraging all of those should be our goal. We can save lives.

Follow-up to my Open Letter.

Can you believe it has been more than four weeks since DEF CON wrapped up? I’m just shaking the ConCrud and still wake up every night at 4am wanting a status update. Anyway, I wanted to follow up, clarify a few things, and let you know what the next steps will be.

This blog post will focus entirely on the DEF CON side of things. A blog in the not too distant future will look at tackling the much bigger problem of these security measures going national. I don’t know about you, but I don’t want to see our hotels become like our airports.

Over the last couple of weeks I have received thousands of messages of support. So many that my phone has had multiple fits. The overwhelming message was “You don’t have our permission to quit, we want to support you in fixing this.” Got it.

Thanks for the vote of confidence. It’s humbling in many ways to know you are behind me. I received messages from hackers from all over the world, even those I consider my personal heroes.

So, I have much work to do.

By now you will have read on the DEF CON site about our meeting with Caesars. It was generally positive, but I consider this the first, baby step on a very long road.

First, they clarified that the detailed policy statement released during closing ceremonies was not given to either me or DT before the event. It became clear that it was received by DEF CON just before closing ceremonies, at 4:11pm – not “months in advance” as some believed. Prior to this, DT assured me that other DEF CON staff received little more than the info already made public in the Las Vegas Review-Journal in February of this year:

https://www.reviewjournal.com/business/casinos-gaming/caesars-to-check-all-occupied-rooms-on-daily-basis/

Now, let’s unpack that policy statement and how it applies to what happened:

1. Setting the “do not disturb” or “room occupied” hanger on your door for “too long” triggers a security visit. Exactly what the parameters are remains unclear. Registration told me it was three days, but I know of rooms inspected after much shorter times.

2. A security visit is supposed to be carried out by two hotel security staff. Staff are supposed to be clearly identifiable as such by both uniform and ID tag. They are supposed to announce themselves to any possible occupants and provide ID when requested. They are supposed to be using their own access cards, not ones from housekeeping or other departments.

3. All hotel security staff can be validated by calling Caesars’ security dispatch. The dispatcher should be able to confirm the name, description and ID for the officer. They should also be able to confirm that this is a genuine security check.

4. A security visit should ONLY constitute a visual inspection of the room. Guests’ belongings should NOT be interfered with; drawers and cases should not be opened, etc.

5. If an officer notices a “red flag” then they are supposed to place a radio call to dispatch for instructions.

One last point from my own experience of working with Caesars over the last few years: their security team handles incidents in very specific ways and has done so for a long time.

For example, if an item of contraband is found (as in the case involving a pair of lock-picks) then they have a very specific protocol, one they have been using for years because it protects their officers as much as it maintains chain of custody on the item:

  • They secure and lock the room.
  • They attempt to locate and inform the guest.
  • When the guest returns, they identify the item visually with security staff and the guest present.
  • The item is then secured in the presence of the guest and held by security, or the matter is escalated per local laws. This means they should not “just take” any items from your room.

So what are some of the things we learn from all this?

1. Missing items with no explanation should be immediately reported as stolen.

If your soldering iron, medicine or gadgets vanished, they were likely taken without any authority – and security confiscation is the least likely explanation. Hotels have had room theft problems for years.

The complete randomness of the items reported suggests theft to me rather than policy-based confiscation. Hotels have a diverse burglary ecosystem, from people using stolen keycards to corrupt housekeepers. There are even street gangs called “knockers” who walk through the hotel testing doors to see if they have been closed properly.

Pro tip: Knowing when something went missing is critical. All Vegas hotels have CCTV, and with a time window they can likely identify the theft and capture the thief on a recording. I have personally helped hotels track down thieves using accurate theft reports and good CCTV. I expect no less from Caesars.

You should report any missing items as stolen NOW.

LVMPD has an online link that allows you to report stolen property:

https://www.lvmpd.com/en-us/RecordsFingerprintBureau/Pages/FileAReportOnline.aspx

While it probably won’t get your stolen item back, you will need it for insurance purposes, and more importantly, getting as many of these filed as possible will give law enforcement a handle on the size of the problem. This is something the hotels need to fix before something terrible happens to one of their guests, and drawing attention to it is one of the best ways to make that happen.

2. The officers who messed with property, opened drawers, opened suitcases or fiddled with items were clearly in breach of their own policy as stated. We will work out what this means as we continue our discussions with Caesars.

All of this is something I personally hate as a security practitioner:

  • The badly implemented and apparently poorly communicated policy.
  • The huge variance in reports on what security guards did or looked for showing an apparent lack of consistent strategy or training.
  • The ineffectiveness of their approach.

Just like many others in the Infosec industry, I have had some specific training in this area. If you want to find observable flags that indicate threats of any kind, you need tried, tested, consistent techniques and consistent training.

I’ve heard of rooms where spent brass from shooting ranges (empty bullet casings) was left on the table, yet the guards walked around photographing nightstands, luggage and personal papers instead. That implies to me they were either not properly trained in what to look for or didn’t really care.

3. Officers who failed to identify themselves properly, or who showed damaged or illegible ID, are also in breach of this policy as stated. This is a disaster waiting to happen. Most of us could duplicate those ID cards with a few minutes’ access to a photocopier. How long before a bogus security guard with dubious ID demands access to a lone guest’s room for criminal purposes? By doing this, hotels are training their guests to be victims.

4. Officers who “just walked in” without announcing themselves are also in breach of policy, and even more concerning. I know of at least one lone female engineer who was disturbed while getting dressed and another guest who was interrupted while bathing. It’s hard to wrap my head around how scared they must have been. These situations could have been far, far worse. They cannot be allowed to happen again.

While most of the reports came from attendees staying at Caesars, it’s clear this was much bigger than just one hotel. I got reports over Twitter from dozens of Las Vegas hotels. The most interesting thing was just how much variation there was in how this was applied. Some hotels phoned up, used it as an excuse to freshen the room, and left chocolate, while others practically kicked the door down and acted like they were executing a warrant. Las Vegas has MUCH work to do if it wants to continue to be seen as a safe destination for hotel guests.

Also, despite claims to the contrary, there were definitely searches at Mandalay Bay. At least one of these was reported on in detail (along with several other incidents) by Seth Rosenblatt from The Parallax:

https://www.the-parallax.com/2018/08/12/vegas-hotel-room-security-privacy-defcon/

“A security researcher and Black Hat attendee who requested anonymity because of the sensitive nature of this story told The Parallax that a Mandalay Bay security employee threatened to bring “more people” to his hotel room door if he didn’t open it.”

Finally, while K9 units were also spotted at the Venetian and Wynn hotels, a quick search on the internet shows that these units have been in place for several years. The fact that ALL the K9 units disappeared from Caesars the moment the last DEF CON guest checked out strongly implies to me that this was security theatre and not any serious attempt to keep guests safe.

If we are going to keep doing something like this (and that’s a big if), at a minimum Las Vegas hotels need to get together and get on the same page. They need to build a transparent, auditable policy with full accountability for those carrying it out. Guards performing this duty must be clearly marked with industry-accepted, quality ID. It should also be possible to see who inspected your property and when. Lastly, there need to be clear lines of escalation so that guests who have lost items or feel violated know who to call and what to expect.

The alternative is likely to be a series of class action lawsuits, with the final outcome being hotels potentially losing the flexibility they currently have under the 4th Amendment.

5. It seems to me that thieves took advantage of the confusion and used it to burglarize rooms with cloned staff key cards. I myself lost several thousand dollars in phones.

In most cases it was small valuable items, prescription medicines and cash – i.e. it looks like it was tweakers. So please file a report with LVMPD; that’s the best way to get this on everyone’s radar.

Missed opportunity?

Speaking as a hacker, this also feels like a HUGE missed opportunity for Vegas. No city knows more about the identity of its guests and their behaviours than Vegas. Most hotels know a huge amount about you before you even check into the hotel itself. Their entire economy is built on it. Why not use all that information?

Other countries and facilities have solved this type of issue much more effectively by combining data, people, processes and technology. We already live in a world where big data can predict your shopping habits down to what illness you have or whether you are shopping for a new baby – it seems like much more of this could be solved in an intelligent, data-driven way instead of the incredibly invasive yet ineffective manual processes we face now. Many of these kinds of vetted ID systems are already in place in Vegas for other purposes. Adapting them could be an excellent way forward.

In my opinion everything we experienced was largely security theatre. Even the Caesars K9 units were all gone by the time the last DEF CON attendee had checked out. I do not feel it would do much to protect guests from active threats such as the one faced last October. If these hotels really want to protect their guests, they couldn’t do better than to engage the hacker community in a positive way rather than an adversarial one.

Over the next few months we will be working to get a transparent, codified policy so that we know exactly where we stand.

Personally I will be working with the Hacker community, EFF and ACLU to look at better ways to challenge these issues, and ultimately to come up with better ways to address the risks raised by October’s attack. We need smart, effective ways to keep our people safe, or in the long run the terrorists win anyway.

Meanwhile, let’s see how Vegas handles the next SHOT Show… who wants to bet this all goes away just in time for it?

Open letter to the Hacker Community.


I have been known by many names. Cyberjunky, Cjunky, CJ, Marc. I am a Hacker and a father. These two things shape my life and generally fill it with joy.

I have done many things in my time on this blue marble flying through space. I have done things I probably shouldn’t have, from being somewhat responsible for the first case of “Hacktivism” (yes, my cDc brothers, I know you coined the name), through to owning most of the internet through the ’90s. Using # as the first character in hosts.equiv was probably a bad idea when people could easily compromise DNS. (Sorry Dan, you weren’t the first). I and some close friends formed one of Britain’s first hacker gangs, “AoHP”. (According to the UK CCU in the ’90s, all Hackers were naive individuals being led by the agents of a hostile power).

I met my wife on IRC; we dated by causing massive netsplits and turning channels into Monty Python sketches. We’ve been married 20 years. I have no idea why she puts up with me.

Marriage and kids caused a tectonic shift for me. I both realized I needed to grow up, and that I also had a gift that could change the world. I rediscovered the meaning of “Hacker” and joined the community that now means more to me than life itself. Only my wife and kids come first. I learnt that hacks could be constructive. I’ve lost count of the things I’ve hacked and fixed, from watches and glasses to cars and supercomputers. I am bound by a code of ethics, and rather than finding it constraining, I have found it liberating and empowering.

DEF CON for me is the spiritual home of that community. My hacker family. I come every year with one goal in mind – to make it better, safer, more welcoming, more transparent. My secondary mission is to show that my brothers and sisters are a critical part of the future. Rather than strange people to be feared, we are a skilled workforce who can do everything from making the world a safer place and industry more intelligent, to saving lives.

Instead of being arrested for this gift, I want to see it celebrated. This can only be achieved through trust, collaboration and community.

This year started on an amazing (but exhausting) high. My awesome team launched the first hacker conference helpline to keep attendees safe. We intercepted and resolved harassment issues. We built an aura of trust where people in serious crisis trusted the SOC goons enough to come forward and ask for help. Diversity is at higher levels than ever before. I couldn’t be prouder.

Then everything changed.

These recent events at Caesars have broken my heart. I am a fierce advocate for privacy. I understand why hotels need to adapt to a new threat vector. I had friends caught up in the horror of October. However, as a security practitioner and hacker, I hate security theater.

I knew about the searches from articles in the press. However I knew NOTHING about this:

http://twitter.com/kevincollier/status/1028781113160990726

If I had received this, in the interest of transparency, I would have informed you all. After all, that’s EXACTLY why I started the DEF CON transparency report. The timing of it looks odd.

I do not support or endorse these room searches or how they are executed. I sympathize with the challenge these hotels are facing but believe they need to take a harder look at the efficiency, impact and long term cost of this strategy.

We MUST NOT let our hotels become like our airports. If we do, then the terrorists win.

Similarly a hotel guest should have the right to feel safe and secure in their hotel rooms. If filming a reporter through the lens in her hotel door is a crime, invading someone’s room should be several orders of magnitude worse. Yes hotel staff NEED to check rooms but if we, who can change the very building blocks of life, can’t come up with a safer way to do this then we really aren’t trying hard enough.

I NEED your support to change things for the better because without you, my community, there is no point.

So I offer you my resignation. By not being aware of Caesars’ statement I failed you. I WILL not let this happen again. However if you no longer feel I am the man to defend you, my community, then I will leave. I suspect much of my team will leave too but….plus ça change.

I’ll leave you with some words I live by:

“This is our world now… the world of the electron and the switch, the beauty of the baud. We make use of a service already existing without paying for what could be dirt-cheap if it wasn’t run by profiteering gluttons, and you call us criminals. We explore… and you call us criminals. We seek after knowledge… and you call us criminals. We exist without skin color, without nationality, without religious bias… and you call us criminals. You build atomic bombs, you wage wars, you murder, cheat, and lie to us and try to make us believe it’s for our own good, yet we’re the criminals.

Yes, I am a criminal. My crime is that of curiosity. My crime is that of judging people by what they say and think, not what they look like. My crime is that of outsmarting you, something that you will never forgive me for.

I am a hacker, and this is my manifesto. You may stop this individual, but you can’t stop us all… after all, we’re all alike.”

My Thoughts on FaceID

FaceID keeps on getting more and more interesting. According to Vietnamese security firm Bkav, they were able to unlock the device using a 3D-printed mask, a handmade prosthetic nose, and 2D printed eyes. This seems unlikely, especially given what Apple has said previously about the testing they did to secure FaceID.

What seems more likely is that Bkav were able to “weaken” the Neural Engine’s model by training it against the mask, essentially poisoning it and creating a backdoor. This is a security weakness in its own right, and I’ll go into why in more detail a little later in this article. However, if my suspicions are correct, then this wasn’t a true break of FaceID but rather a partial break that required prior knowledge of the device’s passcode. It’s still a clever hack, and it introduces some additional questions, the biggest one being why the “liveness test” failed to detect that these were not real eyes. Even with “require attention” turned off, to be truly secure FaceID needs to be able to tell the difference between a human and an artificial construct. Accurate liveness testing is a fundamental foundation all biometric services should be built on.

What we’ve learnt so far about FaceID…..

FaceID uses an infrared emitter/camera to project 30,000 dots onto the subject’s face. This is possible to see, and record, using a DSLR with a modified IR filter, as can be seen in this rather neat video on YouTube. Using these dots, FaceID maps your face and feeds the information to its “Neural Engine”. This machine learning model essentially trains on your face and gets better and better with time.

FaceID doesn’t need your whole face to unlock your phone. Some parts of your face appear to be optional, while others appear to be mandatory. As far as I can tell, the mandatory part is based on the triangle made by your eyes and nose. This is almost certainly to reduce the false negative rate – the measure that access control systems use to judge how many people have been incorrectly refused access. If you think about it, the eyes and nose are the most static parts of your face. Everything else – hair, ears, lips, chin/beard etc – can change daily and are the most variable in different environmental conditions. Rumored production issues with the infrared parts of the phone may have also played a role. This is perfectly reasonable as the balance between false positive and false negative rates can be the difference between a viable, secure biometric system and a complete failure. However, it does significantly reduce the complexity of what an attacker needs to do. Building an accurate map of a small part of a target’s face is much easier than the whole face.
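
To illustrate the balance described above, here is a toy Python sketch (with made-up score distributions, nothing to do with Apple’s actual model) showing how moving the acceptance threshold trades the false accept rate against the false reject rate: loosening it to cut down on annoying rejections necessarily lets more impostors through.

```python
import numpy as np

def far_frr(genuine: np.ndarray, impostor: np.ndarray, threshold: float):
    """False accept / false reject rates at a given match-score threshold."""
    far = float(np.mean(impostor >= threshold))   # impostors wrongly accepted
    frr = float(np.mean(genuine < threshold))     # the real owner wrongly rejected
    return far, frr

# Toy match scores: higher means "looks more like the enrolled face".
rng = np.random.default_rng(0)
genuine = rng.normal(0.85, 0.05, 100_000)
impostor = rng.normal(0.55, 0.10, 100_000)

for t in (0.60, 0.70, 0.80):
    far, frr = far_frr(genuine, impostor, t)
    print(f"threshold={t:.2f}  FAR={far:.4%}  FRR={frr:.4%}")
```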

It’s possible to unlock someone’s phone by showing it to them so they glance at it. This seems to work face on, or at an angle, but it does not work from the side. For me this is the most troubling weakness in the FaceID system. If all a thief has to do is get you to look their way after snatching your phone, then it’s not a very hard attack to imagine. The best defense appears to be to keep your screen-lock time out set very low. As anyone in forensics will tell you, it’s really hard to keep something on a timer unlocked indefinitely.

 

It’s also possible to poison the Neural Engine’s model. This is what I suspect the Vietnamese security researchers from Bkav cleverly did. I think this is also the primary reason behind the frequent reports of close family members unlocking phones. Their facial structure is already similar, so it probably only requires a few instances of retraining to pollute the model with attributes from another person, weakening the accuracy of the model – ironically, the exact opposite of what this mechanism is supposed to do.
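
Apple has not published how the Neural Engine adapts its model after enrollment, so the following is purely a hypothetical Python sketch of the kind of drift I am describing: if a face template is updated with a weighted average every time an attempt is accepted (or is followed by the correct passcode), then repeatedly feeding it someone else’s face slowly pulls the template toward that person.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 128                              # illustrative embedding size, not Apple's
owner = rng.normal(size=dim)
owner /= np.linalg.norm(owner)
attacker = rng.normal(size=dim)
attacker /= np.linalg.norm(attacker)

template = owner.copy()                # enrolled template starts as the owner's face
alpha = 0.1                            # assumed adaptation rate per accepted sample

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Each retraining event blends the new sample into the stored template.
for step in range(1, 31):
    template = (1 - alpha) * template + alpha * attacker
    if step % 10 == 0:
        print(f"after {step} poisoned updates: "
              f"sim(owner)={similarity(template, owner):.2f}  "
              f"sim(attacker)={similarity(template, attacker):.2f}")
```

In a real system the update would presumably be gated by a confidence check, but a passcode fallback of the kind described in the Bkav scenario is exactly what could let low-confidence samples into that loop.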

 

From the testing I did with Wired, it’s clear that the way a material or surface scatters infrared is critical to getting the phone to unlock. If it doesn’t behave like human skin then it won’t, and shouldn’t, register with the phone. However this may have also helped the researchers from Bkav. The 3D printed parts of the face may not have registered at all with the phone, however the position of the eyes, held by the frame, and the depth of the handmade nose would have been critical. This I suspect will be key in any future unlocks.

 

In Conclusion …..

As a hacker I firmly believe anything man makes, man can break. Someone will create a way to fool facial recognition, then someone will improve it, and so on. The only question we should be asking is whether or not it’s secure enough. None of the attacks we have seen so far are very practical. Who sits still for 5 minutes so someone can record their face? That said, if FaceID’s own enrollment process can be completed in a few seconds, so can an attack. It still leaves this in the realm of high-end targeted attacks, so you probably don’t need to worry unless the men in black helicopters really are out to get you.

However, there are still four things that leave some cause for concern –

  • How easy it is for someone to snatch a phone and use your face to unlock it
  • The apparent failure of the liveness testing to tell printed eyes from real ones
  • The amount of your face used to unlock the phone
  • The flaw in the neural engine that allows the model to become polluted

Some of these are easy to fix, some are a little harder. Knowing Apple, they are already working on improvements that will mitigate some of these… just as they know we will be using what we know to improve our attempts to defeat it. 🙂

 

 

 

 

Will the madness never end? Komodia SSL certificates are EVERYWHERE

So, as people have started turning over stones, looking to see how common these Komodia certificates are, some surprising (and depressing) things are beginning to surface.

  1. It does appear that Komodia is behind this.
  2. It appears that Komodia uses the same framework for many, many products. Here’s some that have been found so far:
    1. Komodia’s “Keep My Family Secure” parental control software.
    2. Qustodio’s parental control software
    3. Kurupira Webfilter
    4. Staffcop (version 5.6 and 5.8)
    5. Easy hide IP Classic
    6. Lavasoft Ad-aware Web Companion
    7. Hide-my-ip (note: this package does not appear to utilize the SSL MITM, and the certificate is slightly different from the one found in other packages; however, it still utilizes an unrestricted root certificate with a simple plaintext password).
  3. The password is always “Komodia”
  4. The certificates are always weak; the private key is always bundled with them (of course it is).

I think that at this point it is safe to assume that any SSL interception product sold by Komodia or based on the Komodia SDK is going to be using the same method.

What does this mean? Well, this means that those dodgy certificates aren’t limited to Lenovo laptops sold over a specific date range. It means that anyone who has come into contact with a Komodia product, or who has had some sort of Parental Control software installed on their computer should probably check to see if they are affected.

This problem is MUCH bigger than we thought it was.

SSL Ciphers –

The next issue is that these “proxies” do not correctly implement SSL. They negotiate with the origin server on your behalf – so when you want to connect to your bank site, it is this software that negotiates for you. However, it does this badly. First of all, it supports a range of ciphers, including a number that were deprecated years ago and others that simply should not be used – for example:

[Screenshot: the list of cipher suites supported by the Komodia proxy]

This means the Komodia proxy software may potentially establish – or be forced into establishing – a weak SSL connection using breakable ciphers. This weak connection could then be tampered with or eavesdropped on by hostile actors. The user will be completely unaware, and even if they inspect the status of their SSL connection, all it would show is the strength of the connection between the browser and the Komodia software, not the connection going over the Internet.

In one move this software trashes the last decade of browser security and privacy work, and the last five years of SSL cipher management.

Certificate validation –

The next problem is that it does not appear to validate certificates properly. If a certificate fails validation, the proxy still lets it through and signs it with the Komodia certificate, but it changes the name on the certificate to “verify_fail.<whatever the original domain was>”. So, for example, if you browse to the self-signed certificate on events.ccc.de you will see a Superfish-signed certificate for verify_fail.events.ccc.de. It’s a cheeky attempt to cause a browser warning by having the name mismatch.

However, as discovered by many, if you put the correct name into the “Alternate Names” field of the certificate, the software will blindly copy that name across, and when it changes the primary name the Alternate Name field will still match the expected domain, so the browser will believe the certificate to be valid. Epic Fail.
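
Why does copying the Alternate Name defeat the verify_fail trick? Because modern clients match the requested hostname against the Subject Alternative Name list first and only fall back to the Common Name when no SAN entries exist. A tiny illustrative Python sketch (simplified, ignoring wildcards and other edge cases):

```python
def hostname_accepted(requested: str, common_name: str, san_dns_names: list) -> bool:
    """Simplified RFC 6125-style matching: SAN entries, if present, win outright."""
    if san_dns_names:
        return requested in san_dns_names
    return requested == common_name

# The proxy renames the CN on a failed-validation cert to force a mismatch...
print(hostname_accepted("events.ccc.de",
                        common_name="verify_fail.events.ccc.de",
                        san_dns_names=[]))                     # False -> browser warning

# ...but if the original hostname is copied into the SAN, the warning never fires.
print(hostname_accepted("events.ccc.de",
                        common_name="verify_fail.events.ccc.de",
                        san_dns_names=["events.ccc.de"]))      # True -> silently trusted
```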

More details here: https://blog.filippo.io/komodia-superfish-ssl-validation-is-broken/

If you are a parent who has installed parental control software – in particular one of the packages named above – I would check to see if your computer has been affected by this, as a matter of urgency –

https://filippo.io/Badfish/

UPDATE: @filosottile has updated his test to check for the new certificates.

If you have come into contact with any Komodia product, I would check for unrestricted private root certificates, before carefully removing them and the associated software from any system that you care about.
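
One quick local check, sketched below in Python (using the standard library plus the third-party `cryptography` package, and assuming nothing about any particular Komodia build), is to open a TLS connection yourself and look at the issuer of the certificate your machine is actually handed: if a local interception proxy is in the path, the chain will be signed by its root rather than by a public CA.

```python
import socket
import ssl
from cryptography import x509

def inspect_tls(host: str, port: int = 443) -> None:
    """Print the cipher and certificate issuer this machine actually sees."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE        # we want to look at the cert, not reject it
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = x509.load_der_x509_certificate(tls.getpeercert(binary_form=True))
            issuer = cert.issuer.rfc4514_string()
            print("negotiated cipher:", tls.cipher())
            print("certificate issuer:", issuer)
            if any(name in issuer for name in ("Superfish", "Komodia")):
                print("WARNING: this connection appears to be intercepted locally")

inspect_tls("www.google.com")
```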

Thanks to @thewack0lian for pointing this out! – https://gist.github.com/Wack0/17c56b77a90073be81d3

Thanks also to US CERT for additional vulnerable packages – http://www.kb.cert.org/vuls/id/529496

Lenovo installs adware on customer laptops and compromises ALL SSL.


A pretty shocking thing came to light this evening – Lenovo is installing adware that uses a “man-in-the-middle” attack to break secure connections on affected laptops in order to access sensitive data and inject advertising. As if that wasn’t bad enough, they installed a weak certificate into the system in a way that means affected users cannot trust any secure connections they make – TO ANY SITE.

We trust our hardware manufacturers to build products that are secure. In this current climate of rising cybercrime, if you can’t trust your hardware manufacturer, you are in a very difficult position. That manufacturer has a huge role to play in keeping you safe – from releasing patches to update software when vulnerabilities are found, to behaving in a responsible manner with the data they collect and the privileged access they have to your hardware.

When bad guys are able to get into the supply chain and install malware, it is devastating. Often users find themselves with equipment that is compromised and are unable to do anything about it. When malware is installed with the access a manufacturer has, it buries itself deep inside the system – often with a level of access that takes it beyond the reach of antivirus or other countermeasures. This is why it is all the more disappointing – and shocking – to find a manufacturer doing this to its customers voluntarily.

Lenovo has partnered with a company called Superfish to install advertising software on its customers’ laptops. Under normal circumstances, this would not be cause for concern. However, Superfish’s software has quite a reputation. It is a notorious piece of “adware”, malicious advertising software. A quick search on Google reveals numerous links for pages containing everything from software to remove Superfish, to consumers complaining about the presence of this malicious advertising tool.

Superfish Features:

  • Hijacks legitimate connections.
  • Monitors user activity.
  • Collects personal information and uploads it to its servers.
  • Injects advertising in legitimate pages.
  • Displays popups with advertising software
  • Uses man-in-the-middle attack techniques to crack open secure connections.
  • Presents users with its own fake certificate instead of the legitimate site’s certificate.

This presents a security nightmare for affected consumers.

  1. Superfish replaces legitimate site certificates with its own in order to compromise the connections so it can inject its adverts. This means that anyone affected by this adware cannot trust any secure connections they make.
  2. Users will not be notified if the legitimate site’s certificate has been tampered with, has expired or is bogus. In fact, they now have to rely on Superfish to perform that check for them. Which it does not appear to do.
  3. Because Superfish uses the same certificate for every site it would be easy for another hostile actor to leverage this and further compromise the user’s connections.
  4. Superfish uses a deprecated SHA1 certificate. SHA1 has been replaced by SHA-256 because attacks against SHA1 are now feasible with ordinary computing hardware. This is insult on top of injury. Not only are they compromising people’s SSL connections but they are doing it in the most cavalier, insecure way possible.
  5. Even worse, they use crackable 1024-bit RSA!
  6. The user has to trust that this software which has compromised their secure connections is not tampering with the content, or stealing sensitive data such as usernames and passwords.
  7. If this software or any of its control infrastructure is compromised, an attacker would have complete and unrestricted access to affected customers banking sites, personal data and private messages.

Below is a photo showing Superfish on an affected laptop presenting a fake certificate instead of the legitimate “Bank of America” certificate. As you can see, the user is presented with the fake Superfish certificate instead of the legitimate BoA one.

[Photo: the fake Superfish certificate presented in place of the legitimate Bank of America certificate]

The only way a user would know this has happened is if they check the certificate’s details. Something most ordinary users are unlikely to do to a certificate which to all other appearances is valid and secure.

As mentioned above, the certificate used by Superfish is a deprecated SHA1 certificate that uses 1024-bit RSA. This is particularly obnoxious because they have installed it into the system certificate store as an unrestricted trusted root certificate. To put it into context, they gave it the same level of trust and authority as Microsoft’s own root certificate. Users affected by this can go to any site on the internet, and so long as it presents this certificate, they will be fooled into thinking they have a secure connection. Since this certificate uses SHA1, it is feasible that an attacker could break it and hijack it. This means an attacker could create a bogus certificate that every one of these users would trust.


This is unbelievably ignorant and reckless of them. It’s quite possibly the single worst thing I have seen a manufacturer do to its customer base. At this point I would consider every single one of these affected laptops to be potentially compromised and would reinstall them from scratch.

Lenovo’s response? Typical of companies caught with their hand in the cookie jar, they try to play it down while at the same time saying they have disabled it until it can be “fixed”:

https://forums.lenovo.com/t5/Lenovo-P-Y-and-Z-series/Lenovo-Pre-instaling-adware-spam-Superfish-powerd-by/m-p/1863174#M79882

However, it’s hard to see how they could “fix” this software. Its core functionality undermines the security of SSL, rendering the last decade or so of work making the web secure completely irrelevant.

UPDATE: –

It’s not often that things like this actually get worse. This one has. Because the man-in-the-middle happens locally, the private key has to be bundled with the software: in order to sign certificates for sites on the fly, the software needs that private key on the machine.

This is really bad practice. What makes it even worse is that they used a simple dictionary word as the password for the key. After a little reverse engineering, it is possible to extract that key and crack the password.  Armed with the private key and its password, you can now sign websites and even software in a way that any affected Lenovo user will trust. What’s worse is you can do it under any fake name that you like. Want to sign a virus so that it looks like legitimate Microsoft software? Go ahead: this will let you do exactly that. Want to set up a fake banking site and pretend to be HSBC? Yup, you can do that too.
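
To make clear what “sign websites under any fake name” means in practice, here is a hypothetical Python sketch (not Superfish’s or Komodia’s actual code; `ca_cert` and `ca_key` stand in for the extracted root certificate and its now-public private key, and the third-party `cryptography` package is assumed) of minting a leaf certificate for an arbitrary hostname that every affected machine will happily trust. This is essentially what the proxy does on the fly, and exactly what anyone holding the key can now do too.

```python
from datetime import datetime, timedelta, timezone
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

def mint_leaf(ca_cert: x509.Certificate, ca_key, hostname: str) -> x509.Certificate:
    """Forge a certificate for `hostname`, signed by the trusted (leaked) root."""
    leaf_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    now = datetime.now(timezone.utc)
    return (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, hostname)]))
        .issuer_name(ca_cert.subject)            # chains up to the installed root
        .public_key(leaf_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + timedelta(days=365))
        .add_extension(x509.SubjectAlternativeName([x509.DNSName(hostname)]),
                       critical=False)
        .sign(ca_key, hashes.SHA256())
    )
```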

Who needs to crack SHA1 or factor RSA-1024 if all you need to do is extract the private key? Game Over.

What was the password they used? Komodia. Both a ridiculously easy dictionary word and the name of a well known manufacturer of SSL products. What’s the betting that they are behind this, and that their own SSL proxy products are designed in a similarly terrifying way?


Read more about the reverse engineering of Superfish on Errata Rob’s Blog –

http://blog.erratasec.com/2015/02/extracting-superfish-certificate.html#.VOY1QCmJndk

HOW TO CHECK IF YOU ARE AFFECTED:

My colleague Filippo – @FiloSottile – has written a detector for Superfish-affected laptops. To test if you are affected, simply click on the URL below:

https://filippo.io/Badfish/
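
If you would rather check offline, a small sketch like the one below (Windows-only, since `ssl.enum_certificates` only exists there, and assuming the third-party `cryptography` package) walks the machine’s trusted root and intermediate stores and flags anything that looks like the Superfish or Komodia certificate:

```python
import ssl                                    # ssl.enum_certificates is Windows-only
from cryptography import x509

def find_suspicious_roots(keywords=("Superfish", "Komodia")):
    """Return (store, subject) pairs for trusted certs whose subject matches the keywords."""
    hits = []
    for store in ("ROOT", "CA"):
        for cert_bytes, encoding, _trust in ssl.enum_certificates(store):
            if encoding != "x509_asn":        # skip non-DER encoded entries
                continue
            subject = x509.load_der_x509_certificate(cert_bytes).subject.rfc4514_string()
            if any(k.lower() in subject.lower() for k in keywords):
                hits.append((store, subject))
    return hits

for store, subject in find_suspicious_roots():
    print(f"found in {store} store: {subject}")
```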

WHAT TO DO IF YOU ARE AFFECTED:

Lenovo has published instructions for removing the app:

https://forums.lenovo.com/t5/Lenovo-P-Y-and-Z-series/Removal-Instructions-for-VisualDiscovery-Superfish-application/ta-p/2029206

I have checked them and they are sound, BUT they do not provide instructions for removing the malicious certificate from the system certificate store.

To remove the certificate (which is the worst part) follow these instructions:

  1. Go to Control Panel and search for ‘certificates’.
  2. You’ll find yourself in Administrative Tools.
  3. Select “Manage computer certificates”.
  4. Click on the folder labeled Trusted Root Certification Authorities.
  5. Click on Certificates.
  6. Find the one labelled “Superfish Inc”, right-click, and choose to delete it.

HT to @semenko @kennwhite @fugueish on twitter for the screenshots!

Wrapping up the whole Sony thing…. for now.


So a bunch of things have come out in the last week that honestly make further discussion about attribution pointless. Once again, we are dealing with things said at conferences or deliberately leaked to the media, but given the sources, we have to at least take them somewhat seriously.

North Korean Signals Intelligence (SIGINT)

This was always a “wildcard” that couldn’t be discounted. Given what we know of the NSA and PRISM, it’s hardly surprising to hear that their sensors were “in the right place at the right time” to record some or all of the Sony hack. So what exactly did they collect? Allegedly, they saw phishing emails sent from North Korea to Sony sometime in September. Furthermore, in retrospect, they determined that those emails had been successful in compromising the credentials of at least one admin.

It seems pretty solid. There are some things about this that trouble me, however.

  1. The first is that we are hearing this in the form of bits and pieces of leaked information, passed directly to the media. This seems an unusual strategy for the NSA or the FBI – especially the NSA who have nothing to gain (and everything to lose) from leaking details about classified intelligence operations. Especially now that the White House has accepted their conclusion and taken punitive action. Just who exactly are they trying to convince? Until I hear this from a reliable source who can answer a few technical questions, I am going to continue to be politely skeptical.
  2. The second is that they are saying they have evidence of North Korea hacking Sony based on intelligence they collected from North Korean systems that had already been hacked. These North Korean systems were already owned, most likely by South Korea, and in most cases already had “implants” which were vulnerable enough that the NSA could either hijack them or share their usage with the original group that planted them. That doesn’t inspire me with a lot of confidence. Who is to say that whoever hacked these North Korean systems in the first place wasn’t messing with them? Who is to say that some other third party came along and also compromised these already compromised North Korean systems?
  3. Without accurate identification of the Sony hack vector, it is impossible to know for sure what role these connections played in the Sony hack. Maybe they started it. Maybe someone else started it and they followed in their footsteps. Maybe someone else did it and is quietly chuckling to themselves at how naive we all are.
  4. Finally, are we seriously punishing the North Koreans for hacking our infrastructure based on intelligence we gained from hacking their infrastructure? This does not feel like a righteous position to find ourselves in.

Facebook Connections

This is perhaps the first piece of reasonably solid evidence in the whole affair. Speaking to the press at the International Conference on Cyber Security, Director Comey alleged that, while logging into Sony’s infrastructure, the attackers slipped up and connected directly rather than through their usual proxies. Then, while exposed like this, they logged into the GOP Facebook page. This allowed both Sony and Facebook to record their IP address information.

This is the sort of evidence that solid attribution cases are built upon.

However, it’s still possible to knock a few holes in this. For example, without an accurate understanding of the vector, we don’t know if they started this, or if they took over where someone else left off. We also don’t know whether this whole thing was a deliberate decoy. Without knowing what he meant by “North Korean IPs,” it is also impossible to understand how solid this evidence is. For example, if the IPs come from a North Korean machine owned by the South Koreans, the NSA, and the Chinese, my skepticism remains. It’s hard to say that anything coming from a machine that’s been “hacked to pieces” by multiple parties can definitively be attributed to anyone.

So that’s where we are. In my eyes, the preponderance of evidence definitely suggests either North Korean involvement or someone trying very hard to make it look like North Korean involvement. However, I remain far from convinced that the North Koreans started this or, indeed, that they played a significant role in it. If only the NSA or FBI would invite me to look over their SIGINT and the non-public evidence collected in the case… 🙂

Response to the latest Sony comments by FBI Director Comey.

Here are my responses to the latest comments on the Sony hack, as presented by Director James Comey at the International Conference on Cyber Security in New York this morning.

“The tools in the Sony attack bore striking similarities to a cyber attack the North Koreans conducted in March of last year against South Korean banks and media outlets.”

This is something they pointed out in their first press release.  See https://marcrogers.org/2014/12/21/why-i-still-dont-think-its-likely-that-north-korea-hacked-sony/

“We put our behavior analysis unit to work looking at the statements, the writings, and the diction of the people involved claiming to be the so-called Guardians of Peace in this attack and compared it with other attacks we know the North Koreans have done. And they say, easy for us, it is the same actors.”

OK, but what other attacks? Like some of the other evidence presented, this is interesting-sounding, but without any of the actual science behind it, it’s not much more than hearsay. Which attacks are they comparing this one with? What are the points of similarity? How similar are those points?

“We brought in a red team and said what else might we be missing… and we end up in the same place.”

I don’t even know what this means. Maybe they are saying they brought in a team of professional penetration testers and asked them to use their experience as “hackers”? If so, it’s interesting but again it is not really evidence.

Update: I’m told that in FBI-speak, “red teaming” means the FBI brought in professional critics to debate competing hypotheses. “Ended up in the same place” means that, despite considering alternate hypotheses, after reviewing the evidence the red team reached the same conclusion: that North Korea was behind the attacks.

I would be very interested in hearing what alternate hypotheses were presented and on the basis of what evidence they were subsequently eliminated.

“I know that some serious folks have suggested we have it wrong, I would suggest, I am not suggesting, they don’t have the facts I have — they don’t see what I see, but there are a couple things I have urged the intelligence community to declassify that I am going to tell you right now. The Guardians of Peace would send emails threatening Sony employees and would post online various statements explaining their work. In nearly every case they used proxy servers to disguise where they were coming from. But several times they got sloppy.”

There are several things here. First, I don’t think anyone would disagree with him when he says he has access to information that no one else has; I would hope that the FBI has access to a lot more information than me. However, what many of us are saying is that if you are going to accuse a foreign country of an egregious crime, and have the US respond in a punitive way to that country, then the evidence should be clear, of a good standard, and handled in a transparent way. If the FBI is sitting on a smoking gun, then they should tell us about it, because so far all they have presented is flimsy, at best circumstantial, evidence.

Let’s look at the email claim:

First, they are saying that these guys, who were so careful to route themselves through multiple public proxies in order to hide their connections, got sloppy and connected directly. It’s a rookie mistake that every hacker dreads. Many of us “hackers” even set up our systems to make this sort of slip-up impossible. So, while it’s definitely plausible, it feels very unlikely for professional or state-sponsored hackers in my book. Hackers who take this much care when hiding their connections have usually developed a methodology based around using these kinds of connections to hide their origin. It becomes such common practice that it’s almost a reflex. Why? Because their freedom depends on it.

However, even if we set that to one side and accept that these emails came from North Korean IP addresses, what are those addresses? If they are addresses in the North Korean IP ranges, then why don’t they share them? If they are North Korean servers, then say so! What about the possibility that this attacker, who has shown the ability and willingness to bounce their connections all over the world, is simply bouncing their messages off of North Korean infrastructure?

Finally, how do they even know these emails came from the attackers? From what I saw, the messages with actual incriminating content were dumped to pastebin and not sent via email. Perhaps there are messages with incriminating content – and by this I mean links to things only the attackers had access to – which they haven’t shared with us? Because from where I am sitting, it’s highly possible that someone other than the attacker could have joined in the fun by sending threatening messages as GOP, as we have already seen happen once in this case.
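
To make the “bouncing” point concrete: the only hop information an email carries is its chain of Received headers, and every hop except the one stamped by the recipient’s own server is just whatever the upstream machine claims. Here is a minimal, purely illustrative sketch of pulling hop IPs out of a message; the message, hosts, and addresses are hypothetical, and this is not a description of how the FBI works:

    import re
    from email import message_from_string

    # A hypothetical raw message; in a real case you would load the original file.
    raw = (
        "Received: from relay.example.net (relay.example.net [203.0.113.7]) by mx.victim.example with ESMTP\n"
        "Received: from unknown (HELO mailer) (198.51.100.23) by relay.example.net with SMTP\n"
        "Subject: hello\n"
        "From: someone@example.org\n"
        "To: target@victim.example\n"
        "\n"
        "Threatening message body goes here.\n"
    )

    msg = message_from_string(raw)

    # Received headers read top-down: the first was stamped by the recipient's
    # own server (verifiable); every later one was stamped by an upstream relay
    # and could be a proxy, a patsy box, or an outright forgery.
    for i, hop in enumerate(msg.get_all("Received", [])):
        ips = re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", hop)
        note = "stamped by the recipient's server" if i == 0 else "claimed by an upstream relay (forgeable)"
        print(f"hop {i}: {ips} ({note})")

Even in this toy example, the 198.51.100.23 hop is just an assertion by the relay; the furthest-back hop in a real message is exactly the part an attacker gets to write, which is why “the email came from a North Korean IP” needs a lot of supporting detail before it means much.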

It would be far more interesting to me if these IP addresses were from connections that resulted in data being moved in or out of the network or connections involved in the control of compromised systems. For example:

  • Connections that can be directly implicated in placing tools into the network or logging into Sony machines
  • Connections involved in directly exfiltrating the terabytes of information that GOP stole from Sony out of the network.

Once you have thoroughly excluded the risk of these connections being bounced through a compromised “patsy” system or through a series of proxies, and once you are able to demonstrate that they are direct, or that you can account for every single point or hop in the connection, it becomes hard for the connections to be anything but the work of the attackers themselves. However, the risk that somehow the connection is being redirected, or that you are being misdirected, is VERY hard to eliminate.

This is what makes attribution so hard.

“Several times, either because they forgot or they had a technical problem, they connected directly and we could see them. And we could see that the IP addresses that were being used to post and to send the emails were coming from IPs that were exclusively used by the North Koreans.”

I think I covered this above.  I agree that it is certainly possible for an attacker to make mistakes like this, but it seems very unlikely. It would be a rookie mistake for an accomplished hacker. However, people do make mistakes.

Equally, however, it could be a sign that different people are involved: one who is being extremely careful, and another, possibly an opportunist, who is sending lots of threatening messages. Perhaps the hacker and an opportunistic emailer taking advantage of the chaos?

Finally, what are the IP addresses, and how can you say “exclusively North Korean”? The only addresses that I can think of as exclusively North Korean are ones that belong to the North Korean IP ranges, and if that’s the case, why not say so? Those addresses are hardly a matter of national security. Also, how did they rule out the possibility that someone is faking those IP addresses, or bouncing off of them, or hijacking them?
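
For what it’s worth, the only block I know of that could honestly be described as exclusively North Korean is the small, publicly documented 175.45.176.0/22 allocation, and checking whether an address falls inside it is trivial, which is why withholding the addresses seems odd to me. A minimal sketch; the candidate addresses below are hypothetical, not addresses from the case:

    import ipaddress

    # The small allocation generally cited as North Korea's own address space.
    # Other infrastructure North Korea is known to use (e.g. ranges leased from
    # Chinese providers) is shared, and so proves far less.
    NK_BLOCK = ipaddress.ip_network("175.45.176.0/22")

    def looks_exclusively_north_korean(ip: str) -> bool:
        return ipaddress.ip_address(ip) in NK_BLOCK

    # Hypothetical candidates:
    for candidate in ("175.45.177.10", "203.0.113.7"):
        print(candidate, looks_exclusively_north_korean(candidate))

Of course, membership in that block still tells you nothing about whether the machine behind the address was being driven by its owner, used as a proxy, or hijacked by someone else, which is precisely the question above.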

“It was a mistake by them that we haven’t told you about before that was a very clear indication of who was doing this. They would shut it off very quickly once they realized the mistake, but not before we saw them and knew where it was coming from.”

“We have a range of other sources and methods that I will continue to protect because we think they are critical to the entire intelligence community’s ability to see future attacks and understand this attack better.”

So we are back to the SIGINT. I respect their need to be discreet about matters of national security. After all, disclosure can jeopardize the operation, the source, and any future use. However, if SIGINT is the only really strong evidence they have, then say so. Stop presenting the rest of this weak, circumstantial evidence and passing it off as a smoking gun.

“I remain where I started not just with high confidence but VERY high confidence that the North Koreans perpetrated this attack.”

Likewise, while I agree there is some interesting information here, I see nothing that conclusively incriminates the North Koreans. Instead, if anything, this evidence just leaves me with even more questions.

“We are still looking to identify the vector – so how did they get into Sony. We see so far spear phishing coming into Sony as late as September this year. We’re still working on that, and when we figure that out we’ll do our best to give you the details on that. But that seems the likely vector of entry into Sony.”

Wow. That’s potentially a HUGE bombshell. Is the FBI really saying that they don’t know what the vector was, or are they just being coy? If they genuinely don’t know what the vector was then I have even more concerns. The vector is often the single most important piece of evidence you can uncover in a cyberattack like this.

I don’t mean to be blasé, because I really do appreciate that running an investigation like this is a very difficult exercise with many moving pieces. However, for me, the vector is a HUGE factor in understanding what happened and ultimately in correctly attributing the criminals who perpetrated it.

I also appreciate that it is easy to comment from the peanut gallery, and not so easy to be sitting in the hot seat, especially with the kinds of constraints that the FBI have.

Finally, a few folks have asked me what I would like to see in order to be convinced that it really was North Korea.

That’s a tough question to answer. I would love to see a smoking gun, but a smoking gun can mean so many things in the world of digital forensics. It’s also something that you so rarely get. Some of the things that would help, though, would be:

  • Evidence of direct, symmetrical connections, i.e. either data connections or control connections that were not routed through a proxy; not asymmetric connections such as an email moving from server to server.
  • Connections that were routed through proxies, supported by logs from every single proxy involved.
  • Comprehensive forensic evidence from a “staging system” used by the attackers to control or deploy their attacks.
  • Evidence of highly specific code, which can then be tied to an individual or individuals.
  • Evidence of a highly specific vulnerability or 0day that can be traced.
  • Details (not just hints) about highly specific tools used by the attackers, which can be linked back to the attackers.
  • The attacker’s laptop with tools, code and logs intact.
  • A confession 😉