My HEARTBLEEDs for you

Posted in Vulnerability on April 9, 2014 by keizer

The Heartbleed Bug is a serious vulnerability in the popular OpenSSL cryptographic software library. This weakness allows stealing the information protected, under normal conditions, by the SSL/TLS encryption used to secure the Internet. SSL/TLS provides communication security and privacy over the Internet for applications such as web, email, instant messaging (IM) and some virtual private networks (VPNs).


The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software. This compromises the secret keys used to identify the service providers and to encrypt the traffic, the names and passwords of the users and the actual content. This allows attackers to eavesdrop on communications, steal data directly from the services and users and to impersonate services and users.


The fix was released promptly, with the advisory stating: “A missing bounds check in the handling of the TLS heartbeat extension can be used to reveal up to 64k of memory to a connected client or server. Only 1.0.1 and 1.0.2-beta releases of OpenSSL are affected including 1.0.1f and 1.0.2-beta1.”

The bug was officially assigned CVE-2014-0160. CVE (Common Vulnerabilities and Exposures) is the standard for information security vulnerability names, maintained by MITRE.
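The missing bounds check is easiest to see in a small sketch. The following is a hypothetical model of the flaw class, not OpenSSL's actual code; the function names and the simulated memory layout are illustrative only:

```javascript
// Hypothetical sketch of the flaw class (NOT OpenSSL's actual code):
// the peer claims a payload length, and the vulnerable handler trusts it.
function heartbeatVulnerable(memory, claimedLength) {
  // Missing bounds check: copies claimedLength bytes from the start of
  // "memory", even when the real payload is shorter, leaking what follows.
  return memory.slice(0, claimedLength).toString();
}

function heartbeatFixed(memory, claimedLength, actualPayloadLength) {
  // The fix: silently discard requests whose claimed length is too large.
  if (claimedLength > actualPayloadLength) return null;
  return memory.slice(0, claimedLength).toString();
}

// Simulated process memory: a 4-byte payload followed by secret data.
const memory = Buffer.from("pingSECRET_PRIVATE_KEY");
console.log(heartbeatVulnerable(memory, 22)); // leaks the adjacent secret
console.log(heartbeatFixed(memory, 22, 4));   // null: request rejected
```

The real bug works the same way, only the "adjacent memory" belongs to the OpenSSL process and can contain keys, passwords and session data.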

What is being leaked?

Encryption is used to protect secrets that may harm your privacy or security if they leak. In order to coordinate recovery from this bug you can classify the compromised secrets into four categories: 1) primary key material, 2) secondary key material, 3) protected content and 4) collateral.

What is leaked primary key material and how to recover?

These are the crown jewels, the encryption keys themselves. Leaked secret keys allow the attacker to decrypt any past and future traffic to the protected services and to impersonate the service at will. Any protection given by the encryption and the signatures in the X.509 certificates can be bypassed. Recovery from this leak requires patching the vulnerability, revoking the compromised keys, and reissuing and redistributing new keys. Even after all this, any traffic intercepted by the attacker in the past remains vulnerable to decryption. All this has to be done by the owners of the services.

What is leaked secondary key material and how to recover?

These are, for example, the user credentials (user names and passwords) used in the vulnerable services. Recovery from this leak requires the owners of the service first to restore trust in the service according to the steps described above. After this, users can start changing their passwords and possibly their encryption keys according to the instructions from the owners of the compromised services. All session keys and session cookies should be invalidated and considered compromised.

What is leaked protected content and how to recover?

This is the actual content handled by the vulnerable services. It may be personal or financial details, private communication such as emails or instant messages, documents or anything seen worth protecting by encryption. Only the owners of the services will be able to estimate the likelihood of what has been leaked, and they should notify their users accordingly. The most important thing is to restore trust in the primary and secondary key material as described above. Only this enables safe use of the compromised services in the future.

What is leaked collateral and how to recover?

Leaked collateral consists of other details that have been exposed to the attacker in the leaked memory content. These may include technical details such as memory addresses and security measures such as canaries used to protect against overflow attacks. These have only temporary value and will lose their worth to the attacker once OpenSSL has been upgraded to a fixed version.

Are YOU affected by the bug?

Some operating system distributions that have shipped with a potentially vulnerable OpenSSL version:

Debian Wheezy (stable), OpenSSL 1.0.1e-2+deb7u4
Ubuntu 12.04.4 LTS, OpenSSL 1.0.1-4ubuntu5.11
CentOS 6.5, OpenSSL 1.0.1e-15
Fedora 18, OpenSSL 1.0.1e-4
OpenBSD 5.3 (OpenSSL 1.0.1c 10 May 2012) and 5.4 (OpenSSL 1.0.1c 10 May 2012)
FreeBSD 10.0, OpenSSL 1.0.1e 11 Feb 2013
NetBSD 5.0.2 (OpenSSL 1.0.1e)
OpenSUSE 12.2 (OpenSSL 1.0.1c)

You can also test your site with the Heartbleed test:

[Screenshot: Heartbleed test results (click to enlarge)]


How can OpenSSL be fixed?

Even though the actual code fix may appear trivial, the OpenSSL team are the experts in fixing it properly, so the latest fixed version, 1.0.1g or newer, should be used. If this is not possible, software developers can recompile OpenSSL with the heartbeat extension removed from the code by using the compile-time option -DOPENSSL_NO_HEARTBEATS.


For more information go to:


Resident XSS (Rootkits in your Web application)

Posted in Vulnerability on January 11, 2014 by keizer

HTML5 brought the Internet community a whole bunch of new features to help us create a more dynamic and smooth client side, whether for mobile apps or for a browser-based client. Among its new features you can find: CSS3, Drag&Drop, localStorage & sessionStorage, Streaming, Geolocation, Web Workers and whatnot.

Wait! (imagine you hear the rewind sound): local & session Storage? Does that mean that I can store data on the client? - Yes!
Well, it sounds like a great feature, doesn't it? They all are… but are they safe? - A big fat No!

But let's concentrate on the session & local storage, shall we? What if someone stores things for me? Evil things… it means that my own browser will betray me and hold malicious data that could harm me! How does that work?

Let's say that the site you are using stores some of your account details in your local & session storage, so later on it can read information from your client, sparing it from sending requests to the server.

In this scenario, our website stores your account#, the SessionId, your username and some other string that will be displayed on the screen when you enter the website (sounds like… exactly!).

Our website takes the data it previously entered into the storage and displays it when the page renders. This is how the page looks normally, along with the original information stored in the sessionStorage:

[Screenshot: residentXSS_before (click to enlarge)]


Notice that the message within the speech bubble takes its information from the Session Storage, as can be seen in the above image, right below the page. You can access the page's Resources tab using Chrome's Developer Tools (a.k.a. Inspect Element). We can assume that the code behind it looks something like this:

var username = window.sessionStorage.getItem("User");
var speech = window.sessionStorage.getItem("Speach-of-the-day");
var action = window.sessionStorage.getItem("Random-action");

and calling them using:

<h1> <script>document.write("Hello " + username); </script> </h1>
<h3> <script>document.write(speech);</script> </h3>
<a href="#"> <script>document.write(action);</script> </a>

Now, what if this website is vulnerable to a reflected XSS? In that case the XSS would be active once, in the response that includes it. But what if this website uses the client-side storage to display information?

If the attacker injects some malicious content into our sessionStorage and the website does not perform client-side encoding/escaping (wait… client-side? Yes!), then the XSS takes residence (hence the name) inside our browser, just waiting to be called by the client.

It's also very simple to do: all the attacker needs to know is the name of the key within the storage, and then change its value using: sessionStorage.setItem(key, value);
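In a browser this is a one-liner for the attacker. Here is a runnable sketch of the idea; the tiny sessionStorage stand-in exists only because this snippet is not running in a browser, where the attacker's injected script would call the real Web Storage API directly:

```javascript
// Minimal stand-in for the browser's sessionStorage (Node has none).
const store = new Map();
const sessionStorage = {
  setItem: (k, v) => store.set(k, String(v)),
  getItem: (k) => (store.has(k) ? store.get(k) : null),
};

// The site stored a harmless message...
sessionStorage.setItem("Speach-of-the-day", "Hello and welcome!");

// ...and a single reflected XSS is enough for the attacker to replace it.
// The payload then runs on every page load that document.write()s this key.
sessionStorage.setItem(
  "Speach-of-the-day",
  "<script>alert('resident XSS')</script>"
);
console.log(sessionStorage.getItem("Speach-of-the-day"));
```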

So, let's go back to the example. The attacker managed to inject code into our sessionStorage using a regular XSS. Then it will look like this:

[Screenshot: residentXSS_after (click to enlarge)]


Notice that the Speach-of-the-day key has changed, and it now contains a script. The next time (and every time after that) the page refreshes, the XSS will be executed:

[Screenshot: residentXSS_alert (click to enlarge)]


What can we do about it? As I mentioned earlier, escape content read from the client storage before using it! That way, even if someone planted a malicious script, it won't be executed:

[Screenshot: residentXSS_escaping (click to enlarge)]
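One way to do that escaping is a small helper like the sketch below. This is illustrative only; in practice prefer a vetted library, or assign via textContent instead of document.write:

```javascript
// Escape the characters HTML treats as markup before rendering
// attacker-controllable storage values.
function escapeHtml(s) {
  const map = { "&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;", "'": "&#39;" };
  return String(s).replace(/[&<>"']/g, (c) => map[c]);
}

const stored = "<script>alert('resident XSS')</script>";
console.log(escapeHtml(stored)); // rendered as inert text, never executed
```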


With all the cool features HTML5 has brought us, a lot of new security issues have been introduced. An important part of them involves data stored in the client, or taken from it. Since the client should never be trusted, all the data in it, whether we stored it or not, should be treated as untrusted and escaped, and we should take every possible precaution when dealing with it.

WhatsApp Messages Eavesdropping

Posted in Vulnerability on January 10, 2014 by keizer


Logging into WhatsApp on a new device currently works as follows:

  • The phone posts its phone number to an HTTPS URL to request an authentication code,
  • the phone receives an authentication code in a text message,
  • the authentication code is used, again over HTTPS, to obtain a password.

These passwords are quite long and never visible to the user, making them hard to steal from a phone.

Authentication Overview

With the password, the client can log in to an XMPP-like server. For this it uses the custom SASL mechanism WAUTH-1. To log in with the phone number 1234567890, the following happens (this is only an XML representation of the protocol):

  • The client sends:
    <auth xmlns="urn:ietf:params:xml:ns:xmpp-sasl" user="1234567890" mechanism="WAUTH-1" />
  • The server responds with a challenge (here ABCDEFGHIJKLMNOP):
    <challenge xmlns="urn:ietf:params:xml:ns:xmpp-sasl">ABCDEFGHIJKLMNOP</challenge>
  • To respond to the challenge, the client generates a key using PBKDF2 with the user's password, the challenge data as the salt and SHA1 as the hash function. It only uses 16 iterations of PBKDF2, which is a little low these days, but we know the password is quite long and random, so this does not concern me greatly. 20 bytes from the PBKDF2 result are used as a key for RC4, which is used to encrypt and MAC 1234567890 || ABCDEFGHIJKLMNOP || UNIX timestamp:
    <response xmlns="urn:ietf:params:xml:ns:xmpp-sasl">XXXXXXXXXXXX</response>
  • From now on, every message is encrypted and MACed (using HMAC-SHA1) using this key.

Mistake #1: The same encryption key in both directions

How is RC4 supposed to work? RC4 is a PRNG that generates a stream of bytes, which is XORed with the plaintext to be encrypted. By XORing the ciphertext with the same stream, the plaintext is recovered.

So, (A ^ X) ^ (B ^ X) = A ^ B

In other words: if we have two messages encrypted with the same RC4 key, we can cancel the key stream out!

As WhatsApp uses the same key for the incoming and the outgoing RC4 stream, ciphertext byte i on the incoming stream XORed with ciphertext byte i on the outgoing stream equals plaintext byte i on the incoming stream XORed with plaintext byte i on the outgoing stream. By XORing this with either of the plaintext bytes, we can uncover the other.

This does not directly reveal all bytes, but in many cases it will work: the first couple of messages exchanged are easy to predict from the information that is sent in plain. All messages still have a common structure, despite the binary encoding: for example, every stanza starts with 0xF8. Even if a byte is not known fully, sometimes it can be known that it must be alphanumeric or an integer in a specific range, which can give some information about the other byte.
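The whole attack can be demonstrated in a few lines. The sketch below implements plain RC4 (the key and messages are made up), encrypts one message in each direction with the same key, and shows the keystream cancelling out:

```javascript
// Textbook RC4 (KSA + PRGA); encrypt and decrypt are the same operation.
function rc4(key, data) {
  const S = [...Array(256).keys()];
  let j = 0;
  for (let i = 0; i < 256; i++) {
    j = (j + S[i] + key[i % key.length]) & 0xff;
    const t = S[i]; S[i] = S[j]; S[j] = t;
  }
  const out = Buffer.alloc(data.length);
  let i = 0;
  j = 0;
  for (let n = 0; n < data.length; n++) {
    i = (i + 1) & 0xff;
    j = (j + S[i]) & 0xff;
    const t = S[i]; S[i] = S[j]; S[j] = t;
    out[n] = data[n] ^ S[(S[i] + S[j]) & 0xff];
  }
  return out;
}

const key = Buffer.from("shared-session-key");         // same key both ways
const c2s = rc4(key, Buffer.from("PING from client")); // client -> server
const s2c = rc4(key, Buffer.from("PONG from server")); // server -> client

// XOR the two ciphertexts: the keystream cancels, leaving p1 ^ p2.
// Knowing (or guessing) one plaintext then reveals the other.
const known = Buffer.from("PING from client");
const recovered = Buffer.alloc(c2s.length);
for (let n = 0; n < c2s.length; n++) {
  recovered[n] = c2s[n] ^ s2c[n] ^ known[n];
}
console.log(recovered.toString()); // "PONG from server"
```

The eavesdropper never needs the key: predictable protocol bytes in one direction are enough to read the other direction.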

Mistake #2: The same HMAC key in both directions

The purpose of a MAC is to authenticate messages. But a MAC by itself is not enough to detect all forms of tampering: an attacker could drop specific messages, swap them or even transmit them back to the sender. TLS counters this by including a sequence number in the plaintext of every message and by using a different key for the HMAC for messages from the server to the client and for messages from the client to the server. WhatsApp does not use such a sequence counter and it reuses the key used for RC4 for the HMAC.

When an attacker retransmits, swaps or drops messages, the receiver cannot notice, except for the fact that the decryption of the message is unlikely to be a valid binary-XMPP message. However, transmitting a message back to the sender at exactly the same position in the RC4 stream as it was originally transmitted will make it decrypt properly.


You should assume that anyone who is able to eavesdrop on your WhatsApp connection is capable of decrypting your messages, given enough effort. You should consider all your previous WhatsApp conversations compromised. There is nothing a WhatsApp user can do about this except stop using it until the developers can update it.

There are many pitfalls when developing a streaming encryption protocol. Considering they don't know how to use XOR correctly, maybe the WhatsApp developers should stop trying to do this themselves and adopt a solution that has been reviewed, updated and fixed for more than 15 years, like TLS.

Android PoC

Decompiling this turned into a pile of garbage. All strings are obfuscated, so it's very hard to determine which class is doing what. It appears to use some crypto API, as there are references to certain elliptic curves and AES. This is likely Bouncy Castle, which doesn't mean they actually use ECC or AES.

So on to the Android emulator, where it turns out it is not following the authentication procedure described above. The mechanism is still called WAUTH-1, but the initial <auth> contains 45 bytes of data now:

<auth user="1234567890" xmlns="urn:ietf:params:xml:ns:xmpp-sasl" mechanism="WAUTH-1">XXXXXXXXXXXX</auth>

After this message the server replies with an encrypted response (ABCDEFGHIJKLMNOP).

The first hint that this is still RC4 is that encrypted messages are of different lengths: a block cipher would require padding and would result in messages of exactly the block size.

45 bytes is also exactly the length of the data in the SASL <response>. So let's try to repeat the same trick:

  • The plaintext of the XXXXXXXXXXXX block is still 1234567890 || nonce || UNIX time (1234567890 was the phone number, which is still included in plain). We do not know what the nonce is in this case, but we do know 1234567890. So we could try to make a guess for the timestamp, but let's keep it simple.
  • Calculate (X[i] ^ 1[i]) ^ A[i] (ignoring the HMAC bytes) and the result is (in hex):
    F8 0E BE C3 FC 0A 31 33 38 31 32
  • As mentioned earlier, all stanzas in WhatsApp's binary encoding start with 0xF8. The last 5 bytes are 13812 in ASCII, which looks suspiciously like the 't' attribute of the <success> message (a UNIX timestamp). The 0xBE refers to the string success in WhatsApp's encoding.

This is very unlikely to be a coincidence, which proves that the latest version of the Android client is vulnerable.

But where did that RC4 key come from?

Apparently the server is able to generate the RC4 key using no nonce or key exchange with the client. The key is also different on each login, as otherwise the first couple of ciphertext bytes in the <auth> would always be the same.

We can suspect that the key depends only on the password, but a new password is obtained during every login. The client makes an HTTPS request to WhatsApp before the fake-XMPP connection is made, and yowsup-cli has a flag that will send your password to the server, which will give you a new one if the password was correct.



Thanks to: xnyhps' blog

Google Chromecast – released yesterday, hacked today

Posted in Exploits on July 29, 2013 by keizer


On Wednesday, July 24th, Google launched the Chromecast. As soon as the source code was released, GTV Hacker began their audit. Within a short period of time they had multiple items to look at for when their devices arrived. They received their Chromecasts the following day and were able to confirm that one of the bugs existed in the build the Chromecast shipped with. From that point on they began building what you now see as their public release package.

Exploit Package:

The GTV Hacker Chromecast exploit package will modify the system to spawn a root shell on port 23. This will allow researchers to better investigate the environment, as well as give developers a chance to build and test software on their Chromecasts. For the normal user this release will probably be of no use; for the rest of the community, it is just the first step in opening up what has been a mysterious stick up to this point. GTV Hacker hope that following this release the community will have the tools they need to improve on the shortfalls of this device and make better use of the hardware.

Is it really ChromeOS?

No, it's not. GTV Hacker had a lot of internal discussion on this, and concluded that it's more Android than ChromeOS. To be specific, it's actually a modified Google TV release, but with all of the Bionic / Dalvik stripped out and replaced with a single binary for Chromecast. Since the Marvell DE3005 SoC running this is a single-core variant of the 88DE3100, most of the Google TV code was reused. So, although it's not going to let you install an APK or anything, its origins (the bootloader, kernel, init scripts and binaries) are all from the Google TV.


How does the exploit work?

Luckily for GTV Hacker, Google was kind enough to GPL the bootloader source code for the device, so they could identify the exact flaw that allows booting an unsigned kernel. By holding down the single button while powering on the device, the Chromecast boots into USB boot mode. USB boot mode looks for a signed image at 0x1000 on the USB drive. When found, the image is passed to the internal crypto hardware to be verified, but after this process the return code is never checked! Therefore, they could execute any code at will.

ret = VerifyImage((unsigned int)k_buff, cpu_img_siz, (unsigned int)k_buff);

The example above shows the call made to verify the image; the value stored in ret is never actually checked to ensure that the call to "VerifyImage" succeeded. From that, they were able to execute their own kernel. Hilariously, this was harder to do than their initial analysis suggested, because the USB-booted kernel needed extra modifications to allow modifying /system, as well as a few other tweaks. GTV Hacker then built a custom ramdisk which, when started, began the process of modifying the system by performing the following steps:

  • Mount the USB drive plugged in to the Chromecast.
  • Erase the /system partition (mtd3).
  • Write the new custom system image.
  • Reboot.

Note: /system is squashfs, as opposed to the normally seen EXT4/YAFFS2.
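The missing return-code check behind all of this can be modeled in a few lines. This is a hedged JavaScript reconstruction, not the bootloader's actual C code; verifyImage and the boot functions are stand-ins for the firmware routines:

```javascript
// Stand-in for the crypto-hardware verification: 0 means signature OK,
// mirroring the C convention in the snippet shown earlier.
function verifyImage(image) {
  return image.signed ? 0 : -1;
}

function bootVulnerable(image) {
  verifyImage(image); // return code computed by the hardware...
  return "booted";    // ...but never inspected: unsigned kernels boot
}

function bootFixed(image) {
  if (verifyImage(image) !== 0) {
    throw new Error("unsigned image rejected");
  }
  return "booted";
}

console.log(bootVulnerable({ signed: false })); // boots despite a bad signature
```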

The system image installed from their package is a copy of the original with a modified /bin/clear_crash_counter binary. This binary was modified to perform its original action as well as spawn a telnet server as root.

After the above process, the only modification to the device is the one made to spawn a root shell. No update mitigations are performed, which means that, theoretically, an update could be pushed at any moment patching the exploit. Even with that knowledge, having an internal look at the device is priceless, and they hope that the community will be able to leverage this bug in time.

Downloads and instructions for exploitation can be found on their wiki at: GTVHacker Wiki: Google Chromecast

Rooting SIM cards

Posted in Encryption, Mobile, Network Security, Vulnerability on July 23, 2013 by keizer

SIM cards are the de facto trust anchor of mobile devices worldwide. The cards protect the mobile identity of subscribers, associate devices with phone numbers, and increasingly store payment credentials, for example in NFC-enabled phones with mobile wallets.

With over seven billion cards in active use, SIMs may well be the most widely used security token in the world. Through over-the-air (OTA) updates deployed via SMS, the cards are even extensible through custom Java software. While this extensibility is rarely used so far, its existence already poses a critical hacking risk.

Cracking SIM update keys. OTA commands, such as software updates, are cryptographically secured SMS messages, which are delivered directly to the SIM. While the option exists to use state-of-the-art AES or the somewhat outdated 3DES algorithm for OTA, many (if not most) SIM cards still rely on the 1970s-era DES cipher. DES keys were shown to be crackable within days using FPGA clusters, but they can also be recovered much faster by leveraging rainbow tables similar to those that made GSM's A5/1 cipher breakable by anyone.

To derive a DES OTA key, an attacker starts by sending a binary SMS to a target device. The SIM does not execute the improperly signed OTA command, but does in many cases respond to the attacker with an error code carrying a cryptographic signature, once again sent over binary SMS. A rainbow table resolves this plaintext-signature tuple to a 56-bit DES key within two minutes on a standard computer.

Deploying SIM malware. The cracked DES key enables an attacker to send properly signed binary SMS, which download Java applets onto the SIM. Applets are allowed to send SMS, change voicemail numbers, and query the phone location, among many other predefined functions. These capabilities alone provide plenty of potential for abuse.

In principle, the Java virtual machine should assure that each Java applet only accesses the predefined interfaces. The Java sandbox implementations of at least two major SIM card vendors, however, are not secure: A Java applet can break out of its realm and access the rest of the card. This allows for remote cloning of possibly millions of SIM cards including their mobile identity (IMSI, Ki) as well as payment credentials stored on the card.

Defenses. The risk of remote SIM exploitation can be mitigated on three layers:

  1. Better SIM cards. Cards need to use state-of-the-art cryptography with sufficiently long keys, should not disclose signed plaintexts to attackers, and must implement secure Java virtual machines. While some cards already come close to this objective, the years needed to replace vulnerable legacy cards warrant supplementary defenses.
  2. Handset SMS firewall. One additional protection layer could be anchored in handsets: Each user should be allowed to decide which sources of binary SMS to trust and which others to discard. An SMS firewall on the phone would also address other abuse scenarios including “silent SMS.”
  3. In-network SMS filtering. Remote attackers rely on mobile networks to deliver binary SMS to and from victim phones. Such SMS should only be allowed from a few known sources, but most networks have not implemented such filtering yet. “Home routing” is furthermore needed to increase the protection coverage to customers when roaming. This would also provide long-requested protection from remote tracking.

Thanks to: Security Research Labs who published this!

This research will be presented at Black Hat on July 31st and at the OHM hacking camp on August 3rd, 2013.

Uncovering Android Master Key – 99% of devices are vulnerable!

Posted in Encryption, Malware, Mobile, Vulnerability on July 4, 2013 by keizer

The Bluebox Security research team recently discovered a vulnerability in Android's security model that allows a hacker to modify APK code without breaking an application's cryptographic signature, turning any legitimate application into a malicious Trojan, completely unnoticed by the app store, the phone, or the end user. The implications are huge! This vulnerability, present at least since the release of Android 1.6 (codename: "Donut"), could affect any Android phone released in the last 4 years (nearly 900 million devices), and depending on the type of application, a hacker can exploit it for anything from data theft to the creation of a mobile botnet.

While the risk to the individual and the enterprise is great (a malicious app can access individual data, or gain entry into an enterprise), this risk is compounded when you consider applications developed by the device manufacturers (e.g. HTC, Samsung, Motorola, LG) or third-parties that work in cooperation with the device manufacturer (e.g. Cisco with AnyConnect VPN) – that are granted special elevated privileges within Android – specifically System UID access.

Installation of a Trojan application from the device manufacturer can grant the application full access to Android system and all applications (and their data) currently installed. The application then not only has the ability to read arbitrary application data on the device (email, SMS messages, documents, etc.), retrieve all stored account & service passwords, it can essentially take over the normal functioning of the phone and control any function thereof (make arbitrary phone calls, send arbitrary SMS messages, turn on the camera, and record calls). Finally, and most unsettling, is the potential for a hacker to take advantage of the always-on, always-connected, and always-moving (therefore hard-to-detect) nature of these “zombie” mobile devices to create a botnet.

How it works:

The vulnerability involves discrepancies in how Android applications are cryptographically verified & installed, allowing for APK code modification without breaking the cryptographic signature.

All Android applications contain cryptographic signatures, which Android uses to determine if the app is legitimate and to verify that the app hasn’t been tampered with or modified. This vulnerability makes it possible to change an application’s code without affecting the cryptographic signature of the application – essentially allowing a malicious author to trick Android into believing the app is unchanged even if it has been.

Details of Android security bug 8219321 were responsibly disclosed through Bluebox Security’s close relationship with Google in February 2013. It’s up to device manufacturers to produce and release firmware updates for mobile devices (and furthermore for users to install these updates). The availability of these updates will widely vary depending upon the manufacturer and model in question.

The screenshot below demonstrates that Bluebox Security has been able to modify an Android device manufacturer's application to the level that they now have access to any (and all) permissions on the device. In this case, they have modified the system-level software information about the device to include the name "Bluebox" in the Baseband Version string (a value normally controlled & configured by the system firmware).

Screenshot of HTC Phone After Exploit



  • Device owners should be extra cautious in identifying the publisher of the app they want to download.
  • Enterprises with BYOD implementations should use this news to prompt all users to update their devices, and to highlight the importance of keeping their devices updated.
  • IT should see this vulnerability as another driver to move beyond just device management to focus on deep device integrity checking and securing corporate data.

Thanks Bluebox Security

Skype? Microsoft reads everything you write!

Posted in Application, Encryption, Network Security, Vulnerability on May 23, 2013 by keizer

Thanks go to The H for their original post @

Anyone who uses Skype has consented to the company reading everything they write. The H’s associates in Germany at heise Security have now discovered that the Microsoft subsidiary does in fact make use of this privilege in practice. Shortly after sending HTTPS URLs over the instant messaging service, those URLs receive an unannounced visit from Microsoft HQ in Redmond.

A reader informed heise Security that he had observed some unusual network traffic following a Skype instant messaging conversation. The server indicated a potential replay attack. It turned out that an IP address which traced back to Microsoft had accessed the HTTPS URLs previously transmitted over Skype. Heise Security then reproduced the events by sending two test HTTPS URLs, one containing login information and one pointing to a private cloud-based file-sharing service. A few hours after their Skype messages, they observed the following in the server log:

- - [30/Apr/2013:19:28:32 +0200]
"HEAD /.../login.html?user=tbtest&password=geheim HTTP/1.1"

The access is coming from systems which clearly belong to Microsoft (source: utrace).

They too had received visits to each of the HTTPS URLs transmitted over Skype from an IP address registered to Microsoft in Redmond. URLs pointing to encrypted web pages frequently contain unique session data or other confidential information. HTTP URLs, by contrast, were not accessed. In visiting these pages, Microsoft made use of both the login information and the specially created URL for a private cloud-based file-sharing service.

In response to an enquiry from heise Security, Skype referred them to a passage from its data protection policy:

“Skype may use automated scanning within Instant Messages and SMS to (a) identify suspected spam and/or (b) identify URLs that have been previously flagged as spam, fraud, or phishing links.”

A spokesman for the company confirmed that it scans messages to filter out spam and phishing websites. This explanation does not appear to fit the facts, however. Spam and phishing sites are not usually found on HTTPS pages. By contrast, Skype leaves the more commonly affected HTTP URLs, containing no information on ownership, untouched. Skype also sends HEAD requests, which merely fetch administrative information relating to the server. To check a site for spam or phishing, Skype would need to examine its content.

Back in January, civil rights groups sent an open letter to Microsoft questioning the security of Skype communication since the takeover. The groups behind the letter, which included the Electronic Frontier Foundation and Reporters Without Borders, expressed concern that the restructuring resulting from the takeover meant that Skype would have to comply with US laws on eavesdropping and would therefore have to permit government agencies and secret services to access Skype communications.

In summary, The H and heise Security believe that, having consented to Microsoft using all data transmitted over the service pretty much however it likes, all Skype users should assume that this will actually happen and that the company is not going to reveal what exactly it gets up to with this data.

