Data Leakage Prevention – Protecting Sensitive Information

When DuPont lost $400 million in intellectual property, it wasn't because a hacker on the other side of the world infiltrated its systems. The information was simply stolen by a former employee. Alarmingly, data loss incidents are not always caused by deliberate actions.

A file containing personal information accidentally attached to an email and sent to multiple recipients; financial data on a USB pen drive left behind in a restaurant; bank account details of colleagues inadvertently posted on a company website: these are some of the everyday causes of data loss.

A report by the research company InfoWatch on global data leaks in 2010 showed that there were actually more accidental data leaks that year than intentional ones: accidental leaks made up 53% of incidents, intentional leaks 42%, and the remainder could not be classified.

But even if breaches like these 'only' happen accidentally, they can still be very costly. The tens of thousands of dollars you could end up paying in civil penalties (as when you lose other people's personal information) may be just the beginning. More costly still is the loss of customer and investor confidence; once that goes, a considerable portion of your business can go with it.

Confidential information that may already be leaking out right under your nose

With all the data you collect, process, exchange, and store electronically every day, your IT system has surely become a storehouse of sensitive information, some of which you may even be taking for granted.

But imagine what would happen if any of the following trade secrets fell into the wrong hands: marketing plans, confidential customer information, pricing data, product development strategies, business plans, supplier information, source code, and employee salaries.

These are not the only kinds of data you should be worried about. You could also get into trouble if sloppy IT security fails to protect employee or client personal information such as names, social security numbers, driver's licence numbers, or bank account and credit/debit card numbers along with their corresponding PINs.

In some countries, you could face onerous data breach notification requirements and heavy fines when this kind of data is involved.

There are now more holes to plug

It's not just the different varieties of sensitive electronic information that you have to worry about. Because these data take different forms, namely data-at-rest, data-in-motion, and data-at-the-endpoints, you also need to take aim at different areas of your IT system.

Sensitive information can be found 'at rest' on each of your employees' hard disks, on your servers and storage disks, and on off-site backup disks. It can also be found 'in motion' in email, instant messaging, social networking messages, P2P file sharing, FTP, HTTP, and so on.

That's not all. Your highly mobile workforce may have already introduced yet another high-risk area into your system: data-at-the-endpoints. This includes USB flash drives, laptops, portable hard disks, CDs, and even smartphones.

The main challenge of data leak prevention

Having been made aware of the various aspects of data leakage, have you already come to grips with the extent of the task at hand?

There are two major things you need to do here to prevent data leakage.

One, you need to identify which of your data can be considered sensitive or confidential information. Of course you have financial information and employee salaries in your files. But do you also store personally identifiable information? Do you have trade secrets stored in electronic form?

Two, you need to pinpoint their locations. Are they only on your hard disks and laptops? Or have they made their way to flash drives, CDs/DVDs, or portable HDDs? Are they being transmitted through email or any other file transfer media?

You need to know what your sensitive data are, and where they reside, so that your efforts to secure them can be as efficient and unobtrusive as possible.

Let's say, as a way of protecting your data, you decide to implement encryption. Since encryption adds processing overhead that can significantly reduce performance, it may be impractical to encrypt your entire database or all of your files. For the same reason, you wouldn't want to encrypt every single email you send.

Thus, the best approach is to encrypt only the data that really need encryption. But again, you need to know which data those are and where they can be found. That alone is no simple task.
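To make this concrete, here is a minimal sketch of selective, field-level encryption in Python. It encrypts only the fields flagged as sensitive rather than the whole record, using the Fernet scheme from the third-party cryptography package; the field names and the sample record are hypothetical illustrations, not a prescription.

```python
from cryptography.fernet import Fernet

# Hypothetical example: only some fields justify the performance cost of encryption.
SENSITIVE_FIELDS = {"card_number", "salary"}

key = Fernet.generate_key()  # in practice, load this from a key-management system
cipher = Fernet(key)

def protect(record: dict) -> dict:
    """Encrypt only the sensitive fields, leaving the rest readable."""
    return {
        field: (cipher.encrypt(value.encode()).decode()
                if field in SENSITIVE_FIELDS else value)
        for field, value in record.items()
    }

record = {"name": "J. Bloggs", "card_number": "4111111111111111", "salary": "55000"}
print(protect(record))  # name stays in the clear; card_number and salary become ciphertext
```

The design point is simply that non-sensitive fields remain readable and searchable, while only the genuinely confidential values pay the encryption overhead.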

Not only will you need to deal with the data you already have, you will also have to worry about the data that will go through your systems during the course of your day-to-day transactions.

Identifying sensitive data as it enters or leaves your system, moves through your network, or gets stored in your file system or database, and then applying the necessary security controls, should happen automatically and intelligently. Otherwise, you could end up spending a lot of man-hours or, worse, wasting them on a flood of false positives and false negatives.
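As an illustration of what such automated identification can look like at its simplest, the Python sketch below walks a directory tree and flags files that appear to contain payment card numbers, using the Luhn checksum to weed out random digit runs and so reduce false positives. The scan root and the single card-number pattern are hypothetical stand-ins; a real DLP product ships with many more detectors and inspects data in motion as well as data at rest.

```python
import os
import re

# Loose pattern for 13- to 16-digit card numbers, allowing spaces or hyphens.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(candidate: str) -> bool:
    """Luhn checksum: filters out most digit runs that merely look like card numbers."""
    digits = [int(ch) for ch in candidate if ch.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def scan_tree(root: str):
    """Yield (path, match) pairs for files that appear to contain card numbers."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as fh:
                    text = fh.read()
            except OSError:
                continue  # unreadable file; skip it
            for match in CARD_RE.finditer(text):
                if luhn_valid(match.group()):
                    yield path, match.group()

for path, hit in scan_tree("/shared/finance"):  # hypothetical file share
    print(f"Possible card number {hit!r} found in {path}")
```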

Contact Us

  • (+353)(0)1-443-3807 – IRL
  • (+44)(0)20-7193-9751 – UK

Check our similar posts

How DevOps oils the Value Chain

DevOps, a clipped compound of 'development' and 'operations', is a way of working whereby software developers are on the same team as the project beneficiaries. This client-centred approach extends the project plan to cover the life cycle of the product or service for which the software is developed.

We can then no longer speak of a software project for, say, Joe's Accounting App. The software has no intrinsic value of its own; it follows that the software engineers are building an accounting product. This is a small but crucially important distinction, because they are no longer in a silo with different business interests.

To take the analogy further, the developers are no longer contractors possibly trying to stretch out the process. They are members of Joe's accounting company, and they are just as keen to get to market fast as Joe is to start earning income. DevOps uses this synergy to achieve the overarching business goal.

A Brief Introduction to DevOps

You can skip this section if you have already read this article. If not, you need to know that DevOps is a culture, not a working method. The three 'members' are the software developers, the beneficiaries, and a quality control mechanism. The developers break their task into smaller chunks instead of releasing the code to quality control as a single batch. As a result, the review process happens continuously, along these simplified lines:

Code → QC → Test
       Code → QC → Test
              Code → QC → Test
                     Code → QC → Test

(Key: the developers write each chunk of Code, Quality Control reviews it, and the beneficiary runs the Test, with the next chunk already underway.)

This is a marked improvement over the previously cumbersome method below.

Write the Code → Test the Code → Use the Code → Evaluate, Schedule for Next Review → (back to the start)

Working quickly and releasing smaller amounts of code means the DevOps team learns quickly from its mistakes, and should reach product release ahead of any competitor using the older, more linear method. The shared way of working frees up huge resources in terms of user experience and in-line QC practices. Instead of being in a silo working on its own, development finds it has a richer brief and more support from being 'on the same side of the organisation'.

The Key Role that Application Programming Interfaces Play

Application Programming Interfaces, or APIs for short, are building blocks for software applications. Using these proprietary software bridges speeds development up. A good example is the PayPal integration that we find on so many websites today. APIs are not just for commercial sites, and they can reduce costs and improve efficiency considerably.

The following diagram, courtesy of TIBCO, illustrates how second-party applications integrate with the PayPal architecture via an API façade.


[Diagram: second-party applications integrating with the PayPal architecture via an API façade, courtesy of TIBCO]
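For a flavour of what these building blocks look like in code, the sketch below posts a charge request to a hypothetical payment API from Python, using only the standard library. The endpoint, token, and field names are invented for illustration; a real integration would follow the provider's published API reference, such as PayPal's REST documentation, rather than this exact shape.

```python
import json
import urllib.request

# Hypothetical endpoint and token -- stand-ins, not any real provider's API.
API_URL = "https://api.example-payments.com/v1/charges"
API_TOKEN = "sk_test_placeholder"

def create_charge(amount_cents: int, currency: str, description: str) -> dict:
    """POST a charge request and return the decoded JSON response."""
    payload = json.dumps({
        "amount": amount_cents,
        "currency": currency,
        "description": description,
    }).encode("utf-8")
    request = urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# Usage (commented out because the endpoint above is fictitious):
# receipt = create_charge(1999, "EUR", "Joe's Accounting App subscription")
```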

The DevOps Revolution Continues…

We close with some important insights from an interview with Jim Stoneham, who was general manager of the Yahoo Communities business unit at the time Flickr became a part of it. 'Flickr was a codebase,' Jim recalls, 'that evolved to operate at high scale over seven years, and continuing to scale while adding and refining features was no small challenge. During this transition, it was a huge advantage that there was such an integrated dev and ops team.'

The 'maturity model', as engineers currently refer to DevOps status, enables developers to learn faster and deploy upgrades ahead of their competitors. This means the client reaches and exceeds break-even sooner. DevOps lubricates the value chain so that companies add value to a product faster. One reason it worked so well at Flickr was the immense trust between Dev and Ops, and that is a lesson we should learn.

'We transformed from a team of employees to a team of owners. When you move at that speed, and are looking at the numbers and the results daily, your investment level radically changes. This just can't happen in teams that release quarterly, and it's difficult even with monthly cycles.' (Jim Stoneham)

Without Desktop Virtualisation, you can’t attain True Business Continuity

Even if you've invested in virtualisation, off-site backup, redundancy, data replication, and other related technologies, I'm willing to bet your BC/DR programme still lacks an important ingredient. I bet you've forgotten about your end users and their desktops.

Picture this. A major disaster strikes your city and brings your entire main site down. No problem. You've got all your data backed up at another site. You just need to connect to it and, voilà, you'll be back up and running in no time.

Really?

Do you have PCs ready for your employees to use? Do those machines already have the necessary applications for working on your data? If you still have to install them, then that’s going to take a lot of precious time. When your users get a hold of those machines, will they be facing exactly the same interface that they’ve been used to?

If not, more time will be wasted as they try to familiarise themselves. By the time you're able to declare 'business as usual', you'll have lost customer confidence (or even customers themselves), missed business opportunities, and dropped potential earnings.

That’s not going to happen with desktop virtualisation.

The beauty of virtualisation

Virtualisation in general is a vital component in modern Business Continuity/Disaster Recovery strategies. For instance, by creating multiple copies of virtualised disks and implementing disk redundancy, your operations can continue even if a disk breaks down. Better yet, if you put copies on separate physical servers, then you can likewise continue even if a physical server breaks down.

You can take an even greater step by placing copies of those disks in an entirely separate geographical location, so that if a disaster brings your entire main site down, you can still access your data from the other site.

Because you’re essentially just dealing with files and not physical hardware, virtualisation makes the implementation of redundancy less costly, less tedious, greener, and more effective.

But virtualisation, when used for BC/DR, is mostly focused on the server side. As we’ve pointed out earlier in the article, server side BC/DR efforts are not enough. A significant share of business operations are also dependent on the client side.

Desktop virtualisation (DV) is very similar to server virtualisation, and it comes with nearly the same benefits. That means a virtualised desktop can be copied just like an ordinary file, and if you have a copy of a desktop, you can easily use it if the active copy is destroyed.

In fact, if the PC on which the desktop is running becomes incapacitated, you can simply move to another machine, stream or install a copy of the virtualised desktop there, and get back into the action right away. If all your PCs are incapacitated after a disaster, rapid provisioning of your desktops will keep customers and stakeholders from waiting.

In addition, DV enables your user interface to look just like the one you had on your previous PC. This particular feature is very important to end users. Users normally have their own way of organising things on their desktops; put them in front of a desktop that is not their own, even one with the same OS and the same set of applications, and they'll feel disoriented and won't be able to perform optimally.

ESOS: What is the Truth?

When the UK government introduced its Energy Savings Opportunity Scheme (ESOS), reactions from business people followed a familiar theme:

  • Do nothing; it will go away
  • The next Westminster government will drop this
  • Another stealth tax; I don't have time for this
  • Give the problem to admin and tell them to fix it

ecovaro decided to share three facts with you:

(1) ESOS is not a government money spinner

(2) all major political parties support it, and

(3) it is a cost-effective way to put money back in your pocket while feeling better about what your business pumps into the environment.

Four More ESOS Facts

1. You Cannot Give the Problem to Admin. Energy is technical, and the lead belongs with your operations staff because they understand how your systems work. Some things are best outsourced, though, and ecovaro is here to help.

2. ESOS is Not Going to Go Away. A company inside the regulation net must submit its first report by 5 December 2015. Non-compliance risks the following penalties:

  • £5,000 for not maintaining adequate records
  • £50,000 for not completing the assessment
  • £50,000 for making a false or misleading statement

3. The Employee Count is the Annual Average. The employee-count criterion (unlike the balance-sheet and turnover tests) is based on the monthly average of full- and part-time employees taken across the full financial year. Having fewer than 250 employees in December 2015, when the first report is due, does not necessarily let you off the hook; see the sketch after this list.

4. The 5 December 2015 Report is No Big Deal. When you think about it, the administration is hardly likely to spend years wading through 9,000 detailed company energy plans, and it has no authority to comment in any case. All that is required is for a senior director to confirm having read the document, and for a lead assessor to agree that it complies with the law.
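To see how the averaging rule in point 3 can catch a firm out, here is a minimal sketch with invented monthly headcount figures: the company is below 250 employees in the month the report falls due, yet its annual average still pulls it into scope.

```python
# Hypothetical monthly headcounts (full- plus part-time) across one financial year
monthly_headcount = [260, 262, 258, 255, 252, 250, 255, 258, 260, 248, 246, 244]

annual_average = sum(monthly_headcount) / len(monthly_headcount)
print(annual_average)         # 254.0
print(monthly_headcount[-1])  # 244: under 250 in the final month...
print(annual_average >= 250)  # True: ...but the annual average keeps the firm in scope
```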

Does this mean that ESOS is a damp squib? We do not think so, although some firms may take the low road. ecovaro believes the financial benefits will carry the process forward, and that the imperative to make the world a better place will do the rest.

Ready to work with Denizon?