It’s just NTP, so what is the big deal?

NTP is the grand old lady of time synchronisation protocols. Designed by David L. Mills in 1985, it has reliably provided clients in a network with time accurate to within a few milliseconds.
In a nutshell, NTP uses a tree-like hierarchical system of time sources, with one or more reference time sources (e.g. an atomic clock) at the top. Each hierarchy level is called a stratum and provides accurate time (received from the stratum above) to the layer below by exchanging time synchronisation packets. In the process, network propagation delays are compensated for, and the receiving client is able to synchronise its local clock to the parent stratum server.

However, NTP, like so many other network protocols, was not designed with “gold-plated” security features in mind. While it supports the Autokey Security Architecture (see RFC 5906, Network Time Protocol Version 4: Autokey Specification), which provides message (aka timestamp) authentication via asymmetric public-key cryptography, Autokey has some fundamental design flaws.
This means that – even with Autokey enabled – an attacker on a time synchronisation network can attempt to disrupt a time aware service by sending NTP packets with incorrect time information.

But what is the big deal outside, let’s say, the financial industry, which already ring-fences its NTP infrastructure for obvious reasons? Here are a few examples:

  • In a recent (November 2015) publication (Attacking the Network Time Protocol) security researchers from Boston University showed how a compromised NTP infrastructure can disable, weaken, or hamper a whole range of essential network protocols, including DNSSEC, Kerberos and HTTPS.
    For example, a client “running in the past” would accept a backdated and either revoked or weak server certificate and set up a cryptographically weak HTTPS connection, or trust a malicious server presenting such a certificate.

  • Certain consumer devices (e.g. tablets or smartphones) can be remotely and persistently bricked by feeding them incorrect NTP time information, the so-called “1/1/1970 bug”.
    A very impressive video can be found here.

NTP is a mature and widely used protocol that will be with us for many more years to come. But as long as there is no robust Autokey successor ratified and globally rolled out, time synchronisation vulnerabilities will exist.

“goto fail”: Apple’s SSL Bug and an unanswered Question

In February 2014 a bug in Apple’s SSL implementation was patched. It affected Apple mobile products running iOS 6 and iOS 7 as well as desktop products running OS X 10.9 (Mavericks).
iOS 6 was launched in September 2012, so this problem was out in the wild for quite a while.

The flaw compromised SSL-based HTTPS connections that specifically use Diffie-Hellman (D-H) key exchange during the initial handshake phase.
In D-H both client and server exchange public keys, which are used by each side to calculate a common session key. In order to deter a man-in-the-middle attack the server digitally signs its public key before sending it to the client. The client in turn validates the signature before accepting the public key.
And exactly this validation did not happen… the client simply accepted the public key regardless.

Figure 1: The SSL Bug (Courtesy of http://opensource.apple.com)


In a potential attack scenario a vulnerable client and a man-in-the-middle (MITM) share an open internet connection, for example via a public access point. The client initiates a secure (HTTPS) connection to some remote server. The MITM intercepts the signed public key of the server during the handshake and sends his own public key (complemented with some dummy signature) to the client, thereby creating two separate “secure” connections to both ends. This allows him to decrypt (and re-encrypt) all messages passing in both directions.

In all fairness SSL and TLS implementations are very complicated pieces of code – the affected source file has about 2000 lines of code – and are probably only fully understood by a handful of people. And of course coding errors happen… even to large and well-established companies that have mature development, testing and QA frameworks / organisational structures.

However, the bug manifests itself in a duplicated line of code (“goto fail;” in Figure 1), which translates into the following code structure:

if (condition)
    goto fail;
else
    goto fail;

<more_code>
fail:

The code behind <more_code> (e.g. the outstanding validation of the server’s signature) can never be executed; it is unreachable code.

And this is a blatantly obvious bug. Any software professional with some C programming skills should have seen this in a code review.
And you don’t even need to know anything about SSL or the functionality of the code to realise that there is something fundamentally wrong here.
Even a modern C compiler will give a warning if it detects unreachable code.
But still, this problem was never discovered pre-deployment.

And I wonder why. How could this happen to such a crucial piece of code?

Business and Technology Partnership with PrimeKey

OSNA are pleased to announce a Business and Technology Partnership with PrimeKey (http://www.primekey.se), an innovative and proven PKI provider, based in Stockholm, Sweden.

PrimeKey have a very strong track record in PKI, with deployments across the EU in a range of sectors including Government services, Banking, and Asset Management. OSNA will build on the core competence and proven capabilities of PrimeKey’s PKI solutions to meet the specific needs of M2M communication in the domains of Critical Infrastructure, Medical Devices, and Control Systems/SCADA.

Perfect Forward Secrecy in M2M Communication


Figure 1: Node F eavesdrops on secure communication between nodes P and S

Perfect Forward Secrecy (PFS) is a well understood property of cryptographic protocols. It ensures that a session key SK derived from a long-term public and private key pair (PuK and PrK, the former typically embedded in a digital certificate) will not be compromised if the private key PrK is recovered in the future.

For example, a widely used client/server key agreement mechanism implemented in TLS requires the client node (P in Figure 1) to generate a random pre-master secret, which is encrypted using the server’s public key PuK, before being sent to the server (S in Figure 1). This allows the server to recover the pre-master secret and subsequently both sides to calculate the same session key, which is then used for the (symmetric) encryption of network traffic. However, if a third party (F in Figure 1) logs the entire data communication between both sides and is later able to recover the server’s private key PrK (via brute force, cryptanalysis, court order etc.), the pre-master secret, the session key and consequently the entire encrypted data communication can be retrospectively recovered. In other words perfect forward secrecy is not provided.

PFS can be achieved via a different key exchange protocol, e.g. ephemeral Diffie–Hellman (DHE) or ephemeral elliptic-curve Diffie–Hellman (ECDHE). Neither method requires the exchange of a pre-master secret to agree on a session key.
But even though these key exchange mechanisms have already been adopted in many major cryptographic protocols including TLS, SSH and IPSec, they are not widely used in secure Internet data communication yet.

There are two reasons for this:

  1. PFS comes at a price, as it extends the session key negotiation phase of the cryptographic protocol.
  2. The recovery of a private key PrK by a third party is seen as a rather hypothetical scenario.

However, recent revelations about the widespread capture and storage of encrypted network communication (for later cryptanalysis) by some government agencies bring PFS back into the spotlight. And while the recovery of private key material might not be feasible yet, concerned users and organisations want to see PFS adopted in their secure data communication.

What are the implications for M2M communication?

PFS should be the default for secure M2M communication, regardless of the type of data being transmitted and regardless of how feasible (or not) the recovery of private key material is.
DH and ECDH are computationally expensive, which is a particular concern for resource-constrained embedded systems, but PFS is a fundamental building block of trust. Security in distributed embedded systems can potentially be a leaky bucket (if not properly implemented), so it is essential to plug all potential holes upfront.

Did I already mention that the OSNA authentication protocol has PFS by default?

The Internet of unsecured Things Part 3

Recently one of my students started mapping the Irish IP address space to get an overview of what kind of internet-enabled industry equipment is out there.

Typical HVAC Web Interface


The search was based on various data repositories (he did not use nmap) and showed some surprising results, which could fall under the headline “country-specific type / model variations of industrial controllers”.

However, it did not take long to stumble upon an online HVAC (heating, ventilation and air conditioning) system with a wide-open web interface. Its GUI showed the HVAC’s approximate location (a street on the south side of Dublin) and of course any visitor (hostile or not) would have the ability to manipulate its settings, including boiler temperature, heating pump output, alarm settings etc.

A quick look at Google Maps revealed a number of potential locations of the HVAC, including a church, various commercial buildings and an embassy.

The HVAC has been operational since 2009, so it is unlikely to disappear overnight, and while a manipulation of system settings might not cause any problems now, it could cause havoc in winter time (I mentioned it before: frozen pipes are no fun).

But where to go from here? Who should be notified? Who is responsible?

Justin Clarke from Cylance raised similar questions during a recent security conference in London, where he referred to an internet backdoor in some UK-based hospital building management systems that was the result of a known firmware bug. In his case study the manufacturer would have been able (in theory) to warn its customers about the issue. The HVAC system in Dublin on the other hand was simply poorly configured, so it is not a manufacturer issue.

This is like watching a car speeding towards a cliff and not being able to warn the driver.

The Internet of unsecured Things Part 2

A port server configuration screen


Set a strong password and non-default username!

This mantra is so corny that I should not mention it at all. However, there is a new twist to it:

Last week HD Moore (the creator of Metasploit) presented his findings about vulnerabilities in serial port servers at the InfoSec Southwest 2013 conference.

Serial port or terminal servers provide TCP/IP connectivity for devices with serial (e.g. RS232, RS485) or sometimes non-serial (e.g. GPIO) interfaces. They are widely used to provide remote and out-of-band access to non-networked equipment, for example in industrial automation and environmental monitoring.

Configured properly a modern port server allows the setup of secure point-to-point connections (for example via SSH / SSL) between one or more (serial) interfaces and the network-connected remote hosts.

Recently I used such a device (a 4-port server from Digi International) to connect some legacy RS232 devices to a LAN. However, it took some time to explore and evaluate all the available configuration options and security settings the port server offers.

This complexity might explain some of HD Moore’s findings and recommendations (bar the one above).

In his research he pentested a large number of deployed port server systems that he had previously found via Shodan and the Internet Census 2012. Many of them showed significant weaknesses in their configuration and security settings, effectively allowing him to access the serial devices behind the port servers.

Based on his findings he made a number of recommendations regarding the proper configuration of port servers. Some of them are trivial or straightforward, e.g.

  • Set a strong password and non-default username (sic!).
  • Only use encrypted management services (SSL/SSH).
  • Enable remote event logging.
  • Audit uploaded scripts.
  • Require authentication to access serial ports.

Two of them, however, caught my eye – I must admit I never thought of them:

  • Scan for and disable ADDP (Advanced Device Discovery Protocol) in order to make device discovery harder.
  • Enable inactivity timeouts for serial consoles to avoid session hijacking.

HD Moore concluded his report as follows: “The sheer number of critical, bizarre, and just plain scary devices connected to the internet through serial port servers are an indication of just how dangerous the internet has become.”

I think there is nothing else to add.

Home sweet Home

Image courtesy of Der Spiegel

Domestic cyber attacks are still something very abstract. First of all, Home Area Networks (HAN) and network / internet-enabled appliances are still in their infancy and not widely deployed yet. And even if you could break into and mess around with such an installation, what damage could you do… switch on the patio light?

A recently discovered security hole in a combined heat and power unit offered by Vaillant, one of Europe’s leading heating technology manufacturers, gives a small glimpse of the potential risks.

The ecoPOWER 1.0 is a domestic small-scale system that burns natural gas to provide heating and power for family homes. To date around 800 systems of this type have been installed.
The system can be remote-controlled and remotely serviced via its internet connection.
A web interface allows home owners to control heating settings, while service technicians can use it to remotely service / diagnose the appliance.

However, the German trade journal BHKW-Infothek recently published a report about a security hole in this web interface that allows the recovery of plain text passwords of customers, service technicians and even developers.

Using these credentials attackers can mess around with the system, and for example shut down the entire appliance (frozen pipes in winter time are no fun) or increase the temperature above safe margins, which can cause structural damage to houses. The developer credential allows attackers to go even deeper and access the internal CAN bus directly.

A detailed video (in German) can be found here.

The problem is exacerbated by the fact that all appliances are registered with Vaillant’s own DynDNS service, so devices can be found via trial and error.

In recent days Vaillant has sent all its customers a warning, recommending they manually disconnect the appliances from the network. According to Der Spiegel Vaillant plans to retrofit all ecoPOWER 1.0 systems with VPN boxes.

The Full Monty

The Full Monty (Film) – Image courtesy of www.fmvmagazine.com

I love this film. Six unemployed steelworkers from Sheffield decide to make a few bob by performing a strip show, doing the “Full Monty” (strip all the way).

My personal message from this film is: if you do something, do it right, from beginning to end, regardless of how cumbersome, difficult or (in the guys’ case) embarrassing it is – do it properly or leave it. It (sometimes literally) pays off, you are proud of the achievement and you might even get some credit.

“The Full Monty”, that’s what came into my mind, when I read Eric Romang’s blog about a problem he discovered recently with signed Java apps.

His findings in a nutshell: Java JARs can be digitally signed in order to allow them to operate outside their sandbox, effectively allowing them to download / install additional software on the target computer. The digital signature is a token of trust, which is issued by a (legitimate) company using its own digital certificate. In other words, signed apps are deemed to be secure and trustworthy.

Java control panel


That’s the fundamental concept of trust in PKIs.

However, one of the problems Eric Romang discovered was that the Java runtime environment verifies the signature of a JAR file, but does not by default check the signer’s certificate for revocation. This option needs to be manually enabled (see the Java control panel on the right).

This is a major faux pas in the Java environment and resulted in a situation where a “trusted” but malicious app (that was signed with a revoked certificate) installed an exploit kit on host computers.

Here in OSNA we are working on PKI solutions for M2M communication in isolated networks, basically configurations where OCSP (Online Certificate Status Protocol) is not available and the management and distribution of CRLs (Certificate Revocation List) is tricky.

However, despite these constraints we are working on the Full Monty, as a PKI without revocation is useless and gives a wrong sense of security to users.

In the above film, Horse, one of the main characters, gasped “Nobody said anything to me about the Full Monty!” when the group’s intentions were made public. Well, we in OSNA have been told.

The Internet of unsecured Things

 

Carna map

Courtesy of http://www.bitbucket.org/

Recently the Carna botnet found more than 1 million open (i.e. unprotected) embedded devices on the internet. Many of them were based on Linux and allowed login to BusyBox with default or empty credentials (e.g. root:root, admin:admin, or either with no password at all).

These unprotected devices included consumer routers, set-top boxes, IPSec and BGP routers, x86 equipment with crypto accelerator cards, industrial control systems, physical door security systems and Cisco/Juniper equipment.

This botnet did not cause any damage, but with the Carna report being widely published it is only a matter of time before other malicious botnets specifically target such open devices – in fact, the (anonymous) author of Carna found the Aidra bot already present on one device he used.

And of course 1.2 million devices can’t and won’t be patched. It is an eyesore that won’t go away.

But the thing that bothers me most is the potential threat posed by the “real” Internet of Things that will come (or is supposed to) – aka your internet-enabled anything and everything.

How are we supposed to ride a motorbike, if we can’t even handle a bicycle?

Quo Vadis Windows XP?

Image courtesy of www.microsoft.com

Welcome to my very first blog post on this website. I am going to do something a blogger probably never should do in their first post, unless of course they want to deliberately damage their reputation: make a prediction.

Before I get cold feet, here it is: ICS and critical infrastructure are in for another hit in the summer of 2014.

Ok, such systems already have a very hard time security-wise; there is Stuxnet, Flame, plus whatever other undiscovered malware is out there. We find an increasing number of reported vulnerabilities, leading senior figures like US Defence Secretary Leon Panetta to talk about the risk of imminent attacks on critical infrastructure, causing a “digital Pearl Harbor”.

But there is a new ingredient in the pipeline to add to the mix: on April 8th 2014 Microsoft will end all extended support for Windows XP. No more extended support means no more hotfixes, security patches and service packs – or, as Gerald Himmelein from the German c’t magazine[i] puts it: “A flock of sheep without a guard dog is a feast for the wolf”. Replace the animal references with XP, Microsoft and black-hat hackers, and you see where he is coming from.

According to StatCounter[ii], XP still has (as of March 2013) a market share of 22% – almost three times the market share of Mac OS X. Literature and recent site visits make me believe that its market share in control systems is significantly higher, maybe as high as 40% to 50% (even though not all of these installations are necessarily easy targets for cyber-attacks).

If I were to discover and deploy a zero-day XP exploit, I would wait until April 2014 to maximise its effectiveness. Alas, there is another important factor: the increasing trade in zero-day exploits. In other words, if I were to discover and trade a zero-day XP exploit, I would wait until April 2014 before selling it, maximising its value.

Putting all this together means that there is potentially a wave of cyber attacks based on XP zero-day exploits looming around the corner. Control systems might not be the primary target of such attacks, but there is plenty of opportunity for collateral damage.

Time will tell how big this wave will be, and maybe I should have listened to Robert Storm Petersen (aka Storm P), the Danish humorist who once said: “It’s hard to make predictions – especially about the future.” But as the old saying goes, ‘fail to prepare and you prepare to fail’. How are you preparing for the end of XP support?