98 things Facebook knows about you

SophosLabs has just released a report on a new way that crooks are distributing a strain of malware that makes money by “borrowing” your computer to mine a new sort of cryptocurrency.

A few years ago, cryptocoin mining was a popular pastime. Cryptocurrencies work by making participants perform huge numbers of cryptographic calculations until they get lucky and “mine” a coin. The more computers you could call upon, the better your chance of paydirt.
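Loosely speaking, those "huge numbers of cryptographic calculations" boil down to a brute-force hashing loop like the toy sketch below (a generic proof-of-work illustration in Python, not any particular coin's real mining algorithm):

```python
# Toy proof-of-work sketch: keep hashing until a digest falls below a target.
# This is a generic illustration, not any real coin's mining algorithm.
import hashlib

def mine(block_data: bytes, difficulty_bits: int = 20) -> int:
    target = 1 << (256 - difficulty_bits)   # smaller target = harder puzzle
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce                     # the "lucky" hash has been found
        nonce += 1

print(mine(b"example block header"))         # takes roughly 2**20 attempts on average
```

The more machines grinding through loops like that, the sooner somebody gets lucky, which is exactly why the crooks want other people's computers doing the work.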

So, numerous threats appeared that used infected computers to mine cryptocurrencies at the victim's expense. Mining coins burns through a lot of electricity, so infecting someone else's computer gave the attacker free CPU resources on each infected system, with any rewards from the mining delivered straight into the attacker's wallet.

It was an obvious gambit for the crooks, but after a while the average PC was no longer enough to mine a cryptocurrency like Bitcoin, because the Bitcoin system deliberately increases the difficulty of mining over time, to prevent the supply of Bitcoins from expanding indefinitely.

But newer cryptocurrencies let legitimate participants get in “on the ground floor,” as it were, which also makes them a viable target once again for cryptomining crooks.

This new paper by Attila Marosi, Senior Threat Researcher at Sophos, investigates the Mal/Miner-C malware, which criminals are using to mine the cryptocurrency Monero.

In this paper, Marosi dives into how Mal/Miner-C quietly infects victims’ computers and communicates with host servers to run mining operations covertly in the background. Alone, one computer may not make a big impact on cryptocurrency mining, but the criminals aim to infect as many computers as possible with their malware (which has worm-like self-replicating properties) so they can reap the cumulative financial reward from hundreds of thousands of infected computers.

During the course of his research, Marosi found that a specific kind of Seagate product, the Seagate Central Network Attached Storage (NAS), turned up surprisingly commonly as a distribution server for Mal/Miner-C malware, even though the malware itself can’t run on a Seagate Central device.

Marosi decided to dig further, and scanned the globe looking for Seagate Central devices. More than 7,000 of the servers he found had inadvertently been connected to the internet in such a way that literally anyone in the world could write to them. Of those, more than 70% had already been co-opted by the crooks into what was effectively a free content delivery network for their malware.

By researching Mal/Miner-C, this paper also specifically explores the criminals’ mining activities and how much money this racket is potentially worth to them.

Download this new technical paper today to learn about Mal/Miner-C, how it is used to mine cryptocurrencies, and how you can help to stop the crooks.


If half a billion passwords dragged out of Yahoo isn’t enough to convince us that we need more than passwords to secure our online stuff, perhaps a dancing banana will do the trick.

 

That animation comes out of a new effort to get us to use stronger authentication.

The campaign, called Lock Down Your Login, is a result of a call from the White House in February, when President Obama asked Americans to please use two-factor authentication (2FA).

For the Lock Down Your Login campaign, the White House teamed up with the National Cyber Security Alliance and companies such as Mozilla, Twitter, Google, Visa, Mastercard and Wells Fargo.

The goal is to educate people on how to set up strong authentication on all their online accounts, be they social media, email or banking accounts.

According to a National Cyber Security Alliance (NCSA) survey from July, 72% of Americans think their accounts are secure with only usernames and passwords.

That’s clearly wrong: we hear about new password breaches all the time. Recently discovered breaches, besides Yahoo, include Tumblr (65 million user email addresses and passwords), 164 million LinkedIn passwords, and 427 million passwords from MySpace.

Michael Kaiser, executive director of the NCSA, told CNET that the Yahoo breach was particularly concerning, given that email accounts often contain “crown jewels,” such as passwords to our other accounts, along with a wealth of personal information about us.

That personal information is gold to identity thieves. According to the NCSA, identity fraud hits a new victim every 2 seconds.

Clearly, passwords alone aren’t cutting it. From the campaign’s site:

Your usernames and passwords are not enough to keep your accounts secure. You have enough to worry about, so what can you do about it?

What you can do about it is use strong authentication – what’s also called multifactor authentication, 2FA or two-step verification (2SV) – to make it that much harder for somebody to get into your accounts if they manage to steal or guess your password.

2FA works by requiring that you prove that you’re you by using two different ways to authenticate before you can log in or use a service.

That often means using not just a password, but also something like a one-time code generated by your phone or another device, or perhaps a fingerprint, or…
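If you're curious what “a one-time code generated by your phone” involves under the hood, here's a minimal sketch of an RFC 6238-style time-based code; the shared secret below is invented for illustration, and real authenticator apps handle secrets, clock drift and rate limiting far more carefully:

```python
# Bare-bones sketch of an RFC 6238-style time-based one-time code (TOTP).
# The base32 secret below is invented for illustration only.
import base64, hashlib, hmac, struct, time

def totp(shared_secret_b32, digits=6, step=30):
    key = base64.b32decode(shared_secret_b32, casefold=True)
    counter = int(time.time() // step)                   # both sides use the current 30s window
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                              # "dynamic truncation" from RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))                          # hypothetical shared secret
```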

But wait! Why clunk it up with boring explanations? Instead, let’s turn to the dancing banana.

Everybody, sing!

Use your fingerprint, your face or a code

At home or work or on the road

Two-steps is safer than one (or three! or four!)

And keeping data safe is so much fun!

Chorus:

Authenticate, (strong) authenticate!

Make your logins extra safe

Protect your identity from tragic fate,

Authenticate, strong authenticate!

Bear in mind that receiving text messages with a one-time code may be a great way to secure your accounts, but it’s not infallible. The authentication can be foiled if somebody steals or finds the phone, or if the SMS is hijacked, for example by redirecting it through a VoIP service.

We saw Black Lives Matter activist and politician DeRay Mckesson fall victim to a Twitter hijacking in June – an account takeover that happened in spite of Mckesson using 2FA.

In July, the US National Institute of Standards and Technology (NIST) put out draft guidelines stating that SMS isn’t strong enough for authentication purposes and recommending that it be phased out.

Still, SMS-based 2FA is better than just using a password and user name, Kaiser told CNET, referencing the organization’s advice following the Yahoo breach:

Our response to the Yahoo hack was pretty simple. Go turn it on.



National Cybersecurity Awareness Month: Week Two

From the Break Room to the Boardroom: Creating a Culture of Cybersecurity in the Workplace

Ours is an age where technology has infiltrated virtually every facet of our lives. As a result of this ongoing seismic shift in the way we gather information and communicate with one another, the manner in which we secure our digital lives must adapt to the threats around us.

In the not too distant past, the technical processes of technology were relegated to a handful of IT support staffers who worked their magic on our equipment and then returned to their often mysterious home within the IT department. Thus, a dichotomy developed between those who kept our networks and endpoints operating at peak performance and those who used these technologies to carry out their work-related tasks. However, in an age where cyber-attacks are increasing exponentially in both number and complexity, this division only invites difficulty as organizations defend themselves against data breaches.

By definition, the influence of every culture is measured by the breadth and depth of its reach among those who make up its population. Thus, workplace cultures must be evaluated by the manner in which their values and practices permeate the workforce. It stands to reason that a culture, even one focused on cybersecurity, cannot exist within an organization where resistance to wide scale policy adoption is pronounced.

To sum this up: given the prolific and targeted nature of today’s cybersecurity attacks, a concerted team approach is required to mitigate the threats businesses face. As a result, an effective cyber defense posture will never become ingrained in a company’s culture when there is a low rate of adoption among employees, when executive management fails to lead by example, and when best practices are not regularly communicated. To counteract these pitfalls on the path to a broad culture of cyber awareness, businesses should enact these three action items:

  1. Communicate: When a business is intent on strengthening its cyber resilience, the IT department cannot go it alone. Effective defenses require the ongoing communication of your firm’s cyber priorities. Employees need regular reminders about basic principles and policies, such as password management, and a clear understanding that the boundaries of the modern workplace often follow us home. These threats and their simple solutions should be communicated with regularity.
  2. Educate: Cultures don’t grow by accident, and companies never drift anywhere worth going. These points are just as true within the realm of information security. Employees need to know the how and why of corporate cybersecurity and its importance to company assets and their personally identifiable information.
  3. Cultivate: True cultural evolution calls for the cultivation of its priorities from the top down. Executives who visibly practice cyber policies will have a greater impact on the issue than those who merely issue edicts from the C-Suite. In our age of phishing and ransomware, the CEO is just as vulnerable as the freshly minted intern. Through cultivation, a culture can be developed.

 


Apple may be working on anti-theft technology to protect iPhones that would covertly snap a photo of (what the device assumes is potentially) the thief, capture their fingerprint, shoot some video and/or record audio.

The company has filed a patent application, published on Thursday, that describes the proposed system.

The application says that a “trigger condition” would result in the capture of the biometrics data: say, if the device were to detect potentially unauthorized use, including fiddling with security.

As it is, there are third-party apps that automatically take photos of people who try to unlock our devices, but they only work on jailbroken iOS devices.

One example: back in 2012, a woman tried to unlock a stolen iPhone and unwittingly took her own photo.

An app on the phone called iGotYa then automatically sent the photo to the owner, who called the police.

Did the woman who tried to unlock the phone actually steal it, or did she unwittingly buy a stolen phone from the real thief?

That second scenario is what happened when an Australian woman lost her phone while on vacation at the beach last year.

Its new owner accidentally posted a selfie onto her Facebook page. She assumed he was the thief, but after labeling him as such, he reached out to her to say he’d bought it from a third party and had no idea it was stolen. He then returned it to her.

In other words, just because your iPhone snaps a photo or captures fingerprints doesn’t mean that its target is a crook, obviously.

If Apple does develop its biometrics-capturing technology, it’s possible that a lot of iPhone owners will wind up with a vast collection of selfies depicting their happy, drooly toddlers, and Apple will wind up storing a whole heck of a lot of innocent people’s biometrics.

The patent application describes collecting biometrics that may include “one or more fingerprints, one or more images of a current user of the computing device, video of the current user, audio of the environment of the computing device, forensic interface use information, and so on.”

The computing device may then provide the stored biometric information for identification of one or more unauthorized users.

Nick Statt, writing for The Verge, pointed out a few technical hurdles.

One such hurdle is that, currently, Apple’s Touch ID fingerprint technology requires users to hold their finger down numerous times at a variety of different angles to accurately capture the print.

How likely is it that a thief would be so obliging as to inadvertently press their thumb on the home button, multiple times?

Perhaps Apple will refine TouchID so that it can capture fingerprints more efficiently, Statt suggests.

Using the forward-facing camera to snap photos, videos and/or audio would be a more likely scenario. In fact, iGotYa and similar apps go this route, capturing images when they detect that somebody’s fiddling with security settings.

Underlying all of these possibilities is the notion that an Apple device could be turned into a covert surveillance tool.

From battling the Feds over unlocking criminals’ devices to declaring that “we’re not like the others” when it comes to profiteering off of people’s privacy, Apple has worked overtime to position itself as a privacy champion.

Could the company potentially undermine all that work by turning devices into spying tools?

As it is, phone owners in the US only recently got the option to disable tracking and anti-theft tools if they so desire.

That was a win for those who live in the US and don’t like the idea that their phone’s being tracked.

If you want to be a smarter smartphone user, check out our 10 tips for securing your smartphone.

Also, check out Naked Security writer Paul Ducklin’s step-by-step guide to improving privacy and security on your iPhone, Android or Windows Phone.


 

Nearly two and a half years after Facebook acquired WhatsApp, and despite WhatsApp CEO Jan Koum saying at the time of the acquisition that user privacy wouldn’t suffer, the services are about to get a little bit friendlier with their data sharing.

WhatsApp’s new privacy policy gives it permission to share data, including your phone number, with Facebook “to coordinate more and improve experiences across our services and those of Facebook and the Facebook family”. In an FAQ, WhatsApp says it is doing this to:

  • More accurately count unique users
  • Better fight spam and abuse
  • Show better friend suggestions and more relevant ads to you on Facebook.

The messaging app explained the reasons for the changes in a blog post. It begins by highlighting its plans to test ways for people to communicate with businesses:

Whether it’s hearing from your bank about a potentially fraudulent transaction, or getting notified by an airline about a delayed flight, many of us get this information elsewhere, including in text messages and phone calls. We want to test these features in the next several months.

It also makes some stark promises in the blog post that it won’t…

…post or share your WhatsApp number with others, including on Facebook, and we still won’t sell, share, or give your phone number to advertisers.

Note that the promise is about sharing your number with others ‘on Facebook’, which is not the same as not sharing it with Facebook itself.

Facebook won’t, however, be able to see the content of your messages or your photos.

How to opt out

You can choose not to share your account information with Facebook for targeting purposes. There are two ways to do this:

1. On WhatsApp, don’t click Agree when it asks you to confirm you are happy with the change of terms. Instead, click to read more. You should then see a check box or control button at the bottom of the screen which says “Share my WhatsApp account information with Facebook to improve my Facebook ads and product experiences…”. Uncheck this.


2. If you have already agreed to the updated terms, you can go to Settings > Account > Share my account info in the app, then uncheck the box or toggle the control. But be quick: WhatsApp says you only have 30 days to make this choice after agreeing to the new terms.


Sadly, it’s not a silver bullet

Even if you opt out of the ad targeting part, WhatsApp says that Facebook will still be sent your data “for other purposes such as improving infrastructure and delivery systems, understanding how our services or theirs are used, securing systems, and fighting spam, abuse, or infringement activities.”

So it seems you can’t entirely opt out. Unless you stop using WhatsApp of course.



 

by Sophos

 


It’s October, and that means it’s Cybersecurity Awareness Month (CSAM).

In the USA, it’s not merely CSAM, it’s officially National Cybersecurity Awareness Month, an awareness project aimed at ensuring that everyone has “the resources they need to stay safer and more secure online.”

In 2016, as in previous years, the overall message of NCSAM is a simple one to remember:

STOP. THINK. CONNECT.

That’s actually excellent advice for any online activity, whether that’s uploading snapshots, signing up for a new service, clicking through to a website, or downloading the latest app.

Many cybercrooks have learned to squeeze just hard enough to get us to take needless risks online, without pressing so hard that we get suspicious and turn away.

For example, ransomware often arrives in emails that claim to be invoices or requests for quotation, giving you just enough reason to open the attached document because it looks like the sort of material you receive regularly at work, but not quite enough pause to realise that it doesn’t add up.

Or the crooks send you booby-trapped content that pretends to cover a topic that you are interested in, such as a research paper or a news report. (Your personal interests can probably be found on Facebook; your work interests on LinkedIn.)

Likewise, a recent strain of Mac malware called Eleanor, which tried to hook your webcam up to the Dark Web, posed as a free document conversion utility.

Instead of using fear, or high-pressure techniques, the crooks relied on offering a handy utility that claimed to solve a common hassle for Mac users, knowing that anyone who tried it and deleted it later, without any obviously bad side-effects…

…would be stuck with the malware it delivered, which left behind handy intrusion and hacking tools that the crooks could come back to later.

Sometimes, just a few minutes, or even a few seconds, spent asking yourself, “Is this really a good idea?” is enough to throw a spanner in the infection process.

In real life, it’s perfectly common to look before you leap, because leaping involves real physics, and real forces such as gravity.

Online, it’s easy to get into the habit of relying on some equivalent of [Undo] to try to “unleap” later on if things go wrong.

If you’re really unsure, ask someone round you for advice – but make it a genuine, real-world friend: someone you already know, and like, and trust. Don’t contact the person who sent you the email to ask them to vouch for themselves; don’t rely on calling back the phone number they gave you; and don’t use web links that they provided, either.

Of course, STOP | THINK | CONNECT. doesn’t apply only to those of us who consume online services.

It applies just as strongly to organisations that provide online services and hope that we’ll connect to them.

2016, for example, is shaping up to be the Year of Last Year’s Data Breach, or worse, as we hear news story after news story about massive data breaches that actually happened years ago.

Let’s make sure that 2020 isn’t the year that is remembered as the Year We Found Out About The Breaches of 2016 by acting now to deal with all those security improvements we haven’t quite got around to yet.

If we are more diligent about STOP | THINK | CONNECT before we put precious data where crooks can get at it, we can help everyone, including ourselves, to stay safe online.


 

Apple just released iOS 9.3.5, the latest security update for iDevice users.

We suggest you apply this update as soon as you can, and here’s why.

According to Apple’s security bulletin, it fixes three security holes along these lines:

  1. WebKit bug: visiting a maliciously crafted website may lead to arbitrary code execution.
  2. Kernel bug: an application may be able to disclose kernel memory.
  3. Kernel bug: an application may be able to execute arbitrary code with kernel privileges.

You can imagine how these three vulnerabilities could be combined into a serious exploit, where visiting a booby-trapped website might not only infect you with user-level malware, but also go on from there to promote itself to gain kernel-level superpowers.

The security built into iOS does a great job of keeping apps apart, so user-level malware is limited in what it can do: if you have a rogue GPS app, for example, it shouldn’t be able to reach across to your authenticator app and steal its cryptographic secrets.

Nevertheless, a rogue GPS app would be bad enough on its own, as it could keep track of you when you weren’t expecting it.

But if that rogue GPS app could also sneak itself into the iOS kernel, where the security checks and balances that keep apps apart are managed, then you’d have a lot more to worry about.

Loosely speaking, malware that could arrive just by clicking a web link and then boost itself automatically to kernel level would effectively be a “one-click jailbreak.”

A jailbreak is where you sneakily bypass the very security controls that are supposed to stop you bypassing the security controls, so you no longer have to play by Apple’s security rules. Notably, you are no longer restricted to the App Store, so you can follow up a jailbreak by installing whatever software you like.

Well, reports suggest that just such a one-click jailbreak has been reported in the wild: Gizmodo claims that the attack was created by an Israeli company called NSO Group that sells exploits and hacking services.

Ironically, iOS 9.3.4 came out just three weeks ago, and that update also seems to have been hurried out to close a hole that was ostensibly being used for jailbreaking.

Interestingly, another exploit-gathering company, Zerodium, last year famously offered up to $3,000,000 in bounty money for a trifecta of iOS “click-to-own” bugs, as they’re often called, and later claimed that just before the bounty expired, they’d received a bug submission that could be used for jailbreaking.

Did that bug exist, and was it one of the three that were patched in the latest 9.3.5 update?

We don’t know, but whether it was or wasn’t, you should get yourself the latest patches right away.

Go to Settings | General | Software Update and see what version you’re on right now.

Annoyingly, even though the update is just 39.5MB, you have to update via Wi-Fi. As usual, no updates are allowed via the mobile network. For urgent updates of this sort, it really would be handy for Apple to relax that restriction, especially when you think that you could just stick your SIM card in another phone, turn it into an access point, and update using the mobile network as your carrier anyway.


 


Researchers at the Institute for Research in Computer Science and Automation in France (INRIA) have come up with the latest BWAIN.

A BWAIN is a Bug With An Impressive Name, and this one has a logo, too:

Sweet32 is a way to attack encrypted web connections by generating huge amounts of web traffic, in the hope that the encryption algorithm in use will eventually (and entirely by chance) leak a tiny bit of information about the traffic it’s encrypting.

Sweet32, by the way, is a play on “sweet sixteen,” with the number 32 chosen because it’s half of 64.

That all sounds rather mysterious, so we’ll do our best to explain.

Block ciphers

Data in encrypted web connections is usually encrypted with what’s called a block cipher, such as the well-known Advanced Encryption Standard (AES) algorithm.

As the name suggests, block ciphers work on chunks of data at a time, usually 16 bytes (128 bits).

By mixing up a multi-byte chunk in each encryption cycle, a block cipher not only has plenty of material to involve in its scrambling process, but also produces output a whole block at a time, which is efficient.

Of course, while you’re encrypting, you will get the same block of scrambled output every time you put in the same block of plaintext input, which is no good if you’re encrypting a message with repetition or predictable structure.

Every time you had 16 spaces in a row, or a common string of text such as GET /index.html HTTP/1.1, you’d have a recognisable pattern in the output to match the repetition in the input.

Even if the attacker can’t guess what WR9RFJKW88RFW$#D stands for in the ciphertext, he can see when it repeats, which gives away information about the plaintext – and a good cipher is supposed to prevent that happening.

So, to disguise patterns in the input, block ciphers are often used in CBC mode, short for Cipher Block Chaining.

CBC is a security enhancement that XORs the previous block of ciphertext with the current block of plaintext before encrypting each block.

The first plaintext block, of course, doesn’t have any previous ciphertext to draw on, so it is XORed with a random starting block known as the Initialisation Vector, or IV.

CBC ensures that a run of identical blocks, such as a sector’s worth of zeros, won’t encrypt into a recognisably repeating pattern of ciphertext blocks.

That’s because of the random IV mixed in at the start, and the randomness that then percolates through the encryption of each subsequent block.

If we use P for plaintext, C for ciphertext (the encrypted output), Enc to denote the encryption function and x to count the block numbers (starting from zero), then:

   C(0) = Enc(P(0) XOR IV)
   C(x) = Enc(P(x) XOR C(x-1))   for x > 0
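Here's the same chaining rule as a toy Python sketch, with a stand-in function in place of a real block cipher purely to show the XOR-then-encrypt structure:

```python
# Toy demo of CBC chaining (NOT a real cipher): toy_enc() stands in for Enc above,
# purely to show C(x) = Enc(P(x) XOR C(x-1)), with the IV playing the role of C(-1).
import os

BLOCK = 8  # 64-bit blocks, as used by 3DES and Blowfish

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def toy_enc(block):
    return bytes((b * 7 + 13) % 256 for b in block)   # placeholder, not secure

def cbc_encrypt(plaintext_blocks, iv):
    prev, out = iv, []
    for p in plaintext_blocks:
        c = toy_enc(xor(p, prev))    # XOR with the previous ciphertext block, then "encrypt"
        out.append(c)
        prev = c
    return out

iv = os.urandom(BLOCK)
blocks = [b"AAAAAAAA", b"AAAAAAAA"]       # two identical plaintext blocks...
print(cbc_encrypt(blocks, iv))            # ...come out as two different ciphertext blocks
```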

The birthday attack

Remember that we’re using CBC to prevent repeating patterns in the input from repeating in the output as well.

But there’s nothing to stop two output blocks in the same encrypted message from ending up the same, entirely by chance.

In theory, the output of a block cipher in CBC mode is indistinguishable from random data, so the chance of a repeat, given that there are a whopping 2^128 different 128-bit blocks, should be negligible.

But some websites and VPNs still use old ciphers that use 64-bit blocks, because smaller blocks were easier to deal with on older computers, which had slower processors and less memory.

The algorithms 3DES and Blowfish, for example, both use 64-bit blocks for encryption.

Even so, there are 2^64 possible ways to arrange the bits in a 64-bit block, giving about 18 million million million different blocks.

So you’re probably thinking that the chance of a collision – two identical output blocks appearing within a single encryption session – is as good as zero.

For example, if you generated 2^32 random blocks, you might guess that you’d have a 2^32 out of 2^64 chance of a collision, odds of one in four billion.

But the chance is much, much bigger than that, and here’s why.

Amongst your 4 billion blocks (2^32), you aren’t looking for a specific block to appear twice.

You’re looking for any pair of blocks that happen to be the same, whatever their value might be.

It’s like going to a cocktail party and saying, “I wonder if two people in this room right now have the same birthday?”

If there are only 50 people present, you might assume it was unlikely, given that there are 365 days in the year.

That’s because most people conceptualise this problem as if it said, “I wonder if anyone else in the room has the same birthday as me?”

In fact, with 50 people in the room, a shared birthday is close to certain, and we can show this quite easily “in reverse,” by calculating the probability that everyone in the room has a different birthday.

The first person can have any of the 365 days in the year; the second must have any of the 364 days that don’t match the first; the third must have any of the 363 that don’t match the previous two, and so on.

We need to multiply out the probabilities that each person’s birthday avoids all the days chosen so far, like this:

   (365/365) × (364/365) × (363/365) × … × (316/365) ≈ 0.03

That comes out at 3%, meaning that with 50 people, the chance that at least two people share a birthday is (100% – 3%), or an amazing 97%.

Indeed, you need only 23 people to get better than 50-50 odds that there’s a shared birthday in the room.
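If you'd like to check those figures for yourself, here's a quick Python calculation; the last two lines use the standard approximation for the large-number case:

```python
# Probability that n people all have different birthdays, then the same
# "square root" effect applied to 64-bit cipher blocks.
import math

def p_all_different(n, days=365):
    p = 1.0
    for i in range(n):
        p *= (days - i) / days
    return p

for n in (23, 50):
    print(n, "people -> shared birthday chance:", round(1 - p_all_different(n), 3))
# 23 people -> ~0.507, 50 people -> ~0.970

# For k random samples from N possibilities, P(collision) ~= 1 - exp(-k*k / (2*N)).
k, N = 2.0**32, 2.0**64
print("2^32 blocks from 2^64 ->", round(1 - math.exp(-k * k / (2 * N)), 2))  # ~0.39: roughly even odds
```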

When we make the numbers much bigger, the same sort of thing happens: you get close to those 50-50 odds of a collision when you have 2^32 different samples selected from 2^64 possibilities.

Loosely speaking, when there are 2^N possibilities and you randomly pick just 2^(N/2) samples, the probability of a “birthday collision” is roughly 50%.

This is called the birthday paradox because the result feels all wrong: many people’s intuition tells them that the answer should be 2^N divided by 2, but it’s actually the square root of 2^N.

(Now you know where the name Sweet32 comes from, because 32 is half of 64, and 3DES and Blowfish have 64-bit blocks.)

The attack

What the researchers did is this:

  • Wait for the victim to log in to the target site, thus setting a login cookie that the browser will submit in future HTTP requests.
  • Entice the victim to a second website that contains JavaScript to generate millions of requests back to the target site. (Each request will contain the login cookie, inserted into the headers by the browser.)
  • Sniff the network traffic and store it all up until there’s a collision in the encrypted data blocks.
  • Use the trick described below to decrypt the login cookie.

That sounds easy, but there are some big “ifs” here.

Firstly, the target site has to set a login cookie in a predictable way and at a precisely known position in the encrypted HTTP data blocks.

Secondly, the target site has to open a single HTTP connection and keep it open for many millions of HTTP requests, during which somewhere around 1TB of data will be exchanged.

Thirdly, the collision can’t involve just any two encrypted blocks: one has to be a block that contains the unknown login cookie data, and the other must be a block that contains data generated by the attacker’s JavaScript, so he knows what’s in it.

Recovering the cookie data

When there’s a collision, we know that two blocks encrypted to the same output.

Let’s assume that the third condition listed above is satisfied, and the collision happened between a block containing the unknown data, and a block that contained known data.

We’ll call the block with the unknown cookie in it block U, and the block with known plaintext (generated by the JavaScript) K.

From the description of CBC mode above, we know that:

C(U) = Enc(P(U) XOR C(U-1))   and   C(K) = Enc(P(K) XOR C(K-1))

Of course, the collision means that C(U) = C(K), and if the encrypted values are the same after using the same algorithm with the same key, then the inputs must have been the same, too, so:

P(U) XOR C(U-1) = P(K) XOR C(K-1)

Note that we can rearrange this equation and get:

P(U) = P(K) XOR C(K-1) XOR C(U-1)

We know C(U-1) and C(K-1), because those blocks are part of the encrypted data we sniffed and stored, and we know P(K) because we chose it ourselves.

Therefore we can now calculate P(U) directly.

In other words, we just decrypted 64 bits, or eight bytes, of the original HTTP request, which contains (we hope) something like a login cookie or other critical data worth expending all that effort on.
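Here's a toy Python walkthrough of that last step, with invented eight-byte values standing in for the sniffed ciphertext blocks and the attacker-chosen plaintext:

```python
# Toy walkthrough of the recovery algebra with invented 8-byte values. We plant a
# "secret" block, construct the relationship a collision implies, and show that the
# attacker's XOR calculation gets the secret back.
import os

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

p_u     = b"cookie=Z"                    # secret plaintext block (unknown to the attacker)
p_k     = b"GET /a/b"                    # plaintext block the attacker's JavaScript chose
c_uprev = os.urandom(8)                  # sniffed ciphertext block just before the secret block

# A collision C(U) == C(K) under one key means the cipher inputs were equal:
#   P(U) XOR C(U-1) == P(K) XOR C(K-1), so C(K-1) must equal P(K) XOR P(U) XOR C(U-1).
c_kprev = xor(p_k, xor(p_u, c_uprev))

# The attacker knows P(K), C(K-1) and C(U-1), and computes:
recovered = xor(p_k, xor(c_kprev, c_uprev))
print(recovered == p_u)                  # True: the secret eight bytes fall out of the XORs
```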

Does it work?

In the real world, login cookies are usually longer than eight characters, and are therefore almost certain to take up at least two 64-bit blocks.

So once you’ve won the “birthday gamble” for the first eight bytes, you get to do it all over again to get the next eight bytes, and so on.

In their experiments, the researchers set themselves a target of recovering a 16-byte session cookie, using a Developer Edition version of Firefox to handle the connections and to run the JavaScript needed:

On Firefox Developer Edition 47.0a2, with a few dozen workers running in parallel, we can send up to 2000 requests per second in a single TLS connection. In our experiment, we were lucky to detect the first collision after only 25 minutes (2^20.1 requests), and we verified that the collision revealed [the plaintext we were after …T]he full attack should require 2^36.6 blocks (785 GB) to recover a two-block cookie, which should take 38 hours in our setting. Experimentally, we have recovered a two-block cookie from an HTTPS trace of only 610 GB, captured in 30.5 hours.

In short, this is not a very practical attack: in a day or so, you may be able to steal a login cookie for a user’s session, if they (and the web server) allow the connection to stay open for that long.

But the attack does work, and it could be used in real life, for a few very simple reasons:

  • 2^32, or just over 4 billion, must be treated as a tiny number these days when it comes to cryptographic hacking.
  • A web connection that lasts for more than a day, and exchanges 1TB of data, is no longer out of the ordinary.
  • Attacks like this only ever get faster, as computers get faster and can handle more memory for bigger data sets.
  • Ciphers like 3DES are still widely supported for backward compatibility, even when they aren’t needed.

As the researchers point out:

We found that 86% [of Alexa’s top one million] servers that support TLS include Triple-DES as one of the supported ciphers. Moreover, 1.2% of these servers are configured in such a way that they will actually pick a Triple-DES based ciphersuite with a modern browser, even though better alternatives are available. (In particular many of these servers support AES-based ciphersuites, but use Triple-DES or RC4 preferentially.)

What to do?

  • If you are using a VPN such as OpenVPN that uses a 64-bit block cipher by default, switch to AES instead, with 128-bit blocks.
  • If your web server doesn’t need 3DES support, remove it from the list.
  • If you support 3DES for old browsers, make sure you don’t accept it ahead of better ciphers if the connecting client supports them (see the quick check sketched below).
  • If you accept a 64-bit cipher, limit the time or the number of requests for which the connection will stay open.
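As a quick sanity check of the cipher-preference point above, a few lines of Python will tell you which cipher a server actually negotiates with a modern client (the hostname below is just a placeholder):

```python
# Minimal check of which TLS cipher a server picks when a modern client connects.
# If the output mentions 3DES (often shown as "DES-CBC3"), it's time to reconfigure.
import socket, ssl

host = "example.com"                                 # placeholder hostname
ctx = ssl.create_default_context()
with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        print(tls.version(), tls.cipher())           # e.g. TLSv1.2 ('ECDHE-RSA-AES128-GCM-SHA256', ...)
```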


 


A growing number of people around the world want to try their business luck online. The current trend can be compared to the times when settlers from Europe were moving to the New World. Although the context is different, the key features are the same – the new virtual world has enormous potential, but it also hides many unknown dangers. So, if you want to keep your assets safe on the web, you need to equip yourself with the proper tools.

Introduce multi-level security

When you look at any website from the outside, i.e. from a hacker’s point of view, the number of layers it has might be the key element, security-wise. Since business owners are usually knowledgeable entrepreneurs but rarely proficient IT experts, they should learn which basics to pay attention to. First and foremost, you should insist on installing firewalls as the first layer of your website’s protection: the shell that protects its core. Furthermore, other levels of protection, such as requiring registration for full access and sanitising inputs like search queries, are also high on the list of security priorities. By doing so, you will keep both the website and your clients safer from SQL (Structured Query Language) injection attacks.
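On the SQL injection point in particular, the single most effective habit for whoever builds the site is to use parameterised queries rather than pasting visitor input into SQL strings; here's a minimal Python sketch with invented table and column names:

```python
# Sketch of why input handling matters: string-built SQL is injectable, a
# parameterised query is not. Table and column names are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

search = "nobody' OR '1'='1"   # hostile input typed into a search box

# BAD: the visitor's input becomes part of the SQL statement itself.
dangerous = "SELECT email FROM users WHERE name = '" + search + "'"
print(conn.execute(dangerous).fetchall())      # returns every row in the table

# GOOD: the input is bound as a parameter and never parsed as SQL.
safe = conn.execute("SELECT email FROM users WHERE name = ?", (search,))
print(safe.fetchall())                         # returns nothing
```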

Complex password rule

You know those annoying websites that require passwords with a prescribed number of characters and a combination of letters and numbers? Well, your ecommerce website should follow those exact rules. Too many people approach online security in a laid-back way. For instance, an average Facebook user reveals their birth date to all their friends on that network, which gives a skilful hacker a great basis for guessing passwords, since people often combine their names and birth dates to create them. So, a serious website will insist on complex passwords. It might annoy customers, but protection of clients’ data has to be the most important goal when it comes to security.
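For illustration, a signup form's complexity check can be as simple as the sketch below; the exact thresholds are invented and should be tuned to your own policy:

```python
# Rough sketch of a minimum-complexity check for a signup form. The thresholds
# are illustrative only; pick a policy that suits your site and your customers.
import re

def password_ok(pw):
    return bool(
        len(pw) >= 12
        and re.search(r"[a-z]", pw)
        and re.search(r"[A-Z]", pw)
        and re.search(r"[0-9]", pw)
        and pw.lower() not in {"password1234", "qwerty123456"}   # trivial blocklist example
    )

print(password_ok("Summer2016"))        # False: too short
print(password_ok("n0t-My-Birthd4y"))   # True
```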


Strict verification system

More and more ecommerce websites ask their customers to type the CVV number from their credit cards during the checkout stage of purchase – the three figures on the back of the card – to allow them to make transactions. This is a good sign, since business owners have realized how important it is for their clients to verify their identity. In addition to the CVV number, another useful protective feature is the AVS, aka the address verification system.

On the other hand, ecommerce website owners need to know that they are not allowed to keep data about their customers such as the abovementioned CVV codes or card numbers. This is why they should delete old data and retain only what is necessary for customer-friendly processes, like refunds. By purging such confidential data, you reduce the chance of having your clients’ data stolen.

Renowned platforms for better security

You can implement all the protective measures in the world and your customers could still get e-robbed if you choose a no-name platform. Although lesser-known platforms for ecommerce websites might offer more services at a lower price, such options should be avoided. They may simply be trying to win a share of the market for themselves, but you can never tell who is behind them. So, to keep your business on the safe side of the web, build your ecommerce site on a tested and well-rated platform. Among such platforms, the Magento commerce platform offers features that provide the highest level of ecommerce protection.


Detailed behavioral analysis

Using efficient and practical analytics tools can bring at least two benefits to your ecommerce website. Firstly, you will be able to observe your visitors’ behavior in real time, which gives you great feedback on the functionality of the site. Secondly, such insight will allow you or your website manager to spot any suspicious behavior. Tools that let you watch your visitors’ habits on your website, as if they were under CCTV surveillance, should give you enough information to plan your next moves for increasing data security on the site. Read more about these tools in a piece published by Search Engine Journal.

Whether or not your ecommerce website is a safe place to shop depends on you – its owner – and the platform on which you launch it. If you follow these guidelines and join forces with a trustworthy platform, hackers who attack you will find it very hard to succeed.

Dan Radak is a web hosting security professional with ten years of experience. He is currently working with a number of companies in the field of online security, closely collaborating with a couple of e-commerce companies. He is also a coauthor on several technology websites and regular contributor to Technivorz.

 


The Washington Post recently published a list of 98 specific user details that it says Facebook keeps tabs on.

The theory is that this helps the Zuckernaut to know enough about your behaviours and interests not only to offer better value to its advertisers, but also to make you happier by showing you ads for stuff you might actually like.

(That’s called targeted advertising, where you’re the target.)

The thing is, the list contains some unusual entries that have understandably put the world into a bit of a spin, such as:

14. Square footage of home
29. Mothers, divided by “type” (soccer, trendy, etc.)
45. How much money user is likely to spend on next car
62. Expats (divided by what country they are from originally)
79. Users who are “heavy” buyers of beer, wine or spirits

Number 92 on the Washington Post’s list is probably the most perplexingly eclectic combination:

92. Users who are interested in the Olympics, fall football, cricket or Ramadan

Of course, for many users, lots of this information, such as:

2. Age
4. Gender
8. School

…doesn’t need any research or deduction by Facebook, because many people provide this willingly when they create their Facebook profile.

Similarly, information such as:

51. Operating system
59. Internet browser

…is readily gleaned from almost every web request you make to every site, as it’s tucked into the HTTP headers.
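As a rough illustration of how little effort that takes, here's a tiny Python sketch that pulls those two details out of a typical User-Agent header (the sample string is just an example, not anyone's real data):

```python
# Rough illustration: the User-Agent header that accompanies ordinary web requests
# already announces the visitor's operating system and browser. Sample string only.
ua = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
      "(KHTML, like Gecko) Chrome/53.0.2785.116 Safari/537.36")

def crude_parse(user_agent):
    os_name = next((o for o in ("Windows", "Mac OS X", "Linux", "Android", "iPhone")
                    if o in user_agent), "unknown")
    browser = next((b for b in ("Firefox", "Chrome", "Safari") if b + "/" in user_agent), "unknown")
    return {"operating_system": os_name, "browser": browser}

print(crude_parse(ua))   # {'operating_system': 'Windows', 'browser': 'Chrome'}
```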

The bad news is that this all sounds very creepy, and perhaps it is.

The good news is that Facebook has a way to review what it thinks you like, although as far as I can see, it’s not as straightforward as simply pulling up a 98-point list and editing or deleting each entry.

I logged in, went to Settings | Ads and then clicked on the Ads based on my preferences option.

There you will find a [Visit Ad Preferences] button that takes you to a page that shows what Facebook thinks you’re into.

On the Business and industry tab, I found out what Facebook thought I might like: apparently I am interested in golf and Sophos.

It would be surprising if Facebook hadn’t inferred that I’m interested in Sophos, but where my supposed interest in the Professional Golfers’ Association of America comes from I just can’t imagine.

I’m sure golf is a wonderful and companionable game, and I’m delighted that Britain won the Olympic gold medal at Rio 2016, but it’s not for me – I’d just tip 13 balls into the lake up front and free up hours of time to do something enjoyable instead.

Clearly, Facebook does figure out a lot about you as you use the service and interact with other people, many of whose interests you may share, but it’s far from precise if it thinks that golf is a key interest of mine.

Fortunately, you can use the Ad Preferences page to delete any or all of the data points that Facebook keeps on you, by clicking on an “interest” icon to bring up a delete option, although that won’t spare you from ads:

If you remove all your preferences you’ll still see ads, but they may be less relevant to you.

What I couldn’t find, but would like to have access to from Ad Preferences, was a one-stop page containing all the categories, as listed by the Washington Post; it seems that until Facebook decides you are interested in X, it won’t tell you that X is one of the 98 categories it keeps track of.

We’re guessing that the Washington Post figured out its 98-point list by creating a new ad, or pretending to, and browsing through all the categories that advertisers can choose from when configuring the targeting of that ad.

Have your say!

What do you think?

Is a list of categories like this (whether it really is 98, or 57, or 242) a step too far?

Or are targeted ads mostly harmless?

After all, you’re going to be getting ads anyway – so what’s the harm in making them at least vaguely relevant, based on information you’ve already revealed to Facebook?


 

