Ruminations on Architecture & Security

January 2, 2009

A new year … a new blog

Filed under: Identity — Bavo De Ridder @ 5:40 pm

I have moved my blog from a hosted WordPress account at wordpress.com to a self-hosted version. The new blog can be found at

http://blog.bavoderidder.com/

The RSS feed is at http://blog.bavoderidder.com/?feed=rss2

All existing posts and comments have been migrated. See you at the new site!

December 19, 2008

Disturbances in the cloud

Filed under: Privacy,Security,SOA — Bavo De Ridder @ 2:32 pm

Cloud computing is cool, no doubt about that. Never before have so many good-looking, futuristic schematics been drawn in Visio. Thousands of presentations, workshops and even conferences have been held on the subject.

One question, however, has not been clearly answered yet … what about data ownership? What about the privacy of that data? When your applications are running in the cloud, you are also handing over your data to whoever is running the data center. How sure are you that they protect this data as they should? What about these situations:

  1. Your cloud partner goes out of business and your data becomes a valuable asset that can be sold to pay off debt. How well are you protected from this scenario? Or … what are the guarantees about confidentiality? Think SalesForce …
  2. Your cloud partner goes out of business without any warning, your applications are offline, and your data is not accessible. Worst case you get a couple of days' notice, best case a couple of weeks. Does your disaster recovery plan take this into account? How fast can you move to a new cloud partner or your own data center? How much data will you lose? How recent is the data you go online with after recovery?
  3. Your cloud partner decides to disable a feature in their application, a feature you depend on. Does your disaster recovery plan take this into account? This is not far-fetched; in a small way this is what happened when Microsoft decided to disable anonymous comments on their Live Blog. They even did this retroactively and so revealed identity information of authors who previously had been anonymous.

None of these scenarios is purely technical in nature, and none of them is far-fetched. You can probably think of many more realistic and sure-to-happen situations.

In relation to the third scenario … how many companies have application versions that are far behind the latest public version purely because of functionality or compatibility they depend on? At least all of the companies I have come into contact with are in this situation. If you run everything on your own servers, you have a greater deal of control than you may realize at first. Companies should do their homework when moving some of this into the cloud; they are often giving up far more control than they think, and more than they want to. Contracts alone won't solve it either.

December 5, 2008

Prank calls and countermeasures.

Filed under: Security — Bavo De Ridder @ 1:15 pm

Gunnar Peterson linked to this fun but interesting story:

“He sounded just like Obama,” she said on Thursday, referring to President-elect Barack Obama.
Sensing she was the victim of a spoof by a South Florida radio station, she promptly disconnected the call.

Trouble was, it was Obama.

A chagrined Ros-Lehtinen told the Fox News Channel that she also hung up on Obama’s chief of staff, Rahm Emanuel, when he called her back to explain it really was the next president on the line.

Both Emanuel and Obama tried to convince her the call was for real.

“Guys, it’s a great prank, really,” she said she told them.

It took a subsequent call from California Democratic Rep. Howard Berman, chairman of the House Foreign Affairs Committee, to finally convince Ros-Lehtinen to talk to Obama.

To convince her that it really was Berman, she said she told him, “Give me the private joke that we share.”

This type of prank call, where someone calls you and pretends to be a high-ranking official in some country, used to be a low-probability, medium-impact risk. Low probability, since there was a line most people did not cross. We all laughed at prank calls where a radio presenter pretended to be some unknown person who wanted to order something so extraordinary that it was hard to believe it was true … but funny. Pretending to be the President of France or the President-elect of the United States, however, was a whole different story. That was simply not done; who knows what the repercussions would be! It was also a medium-impact risk because in most cases the victim of the prank call was not a VIP, often just a receptionist at a company. The impact was all personal and completely forgotten after a couple of weeks.

But recently some people changed all this. Today this is a medium-to-high-probability, high-impact risk. Certainly a higher probability, since others have done it and gotten away with it. Surely a higher impact as well; just look at what happened to Sarah Palin. It's not something that is forgotten after a couple of weeks; this is something that sticks to your career now.

But my congratulations to Ros-Lehtinen, who not only recognized the change in the risk profile but also employed a simple yet effective countermeasure: the use of a shared secret.
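
Her "give me the private joke that we share" is, informally, a challenge-response protocol built on a pre-shared secret. A minimal sketch of the same idea in code (the secret and function names are mine, purely for illustration):

```python
import hashlib
import hmac
import os

# Pre-shared secret, established out of band (the "private joke").
SECRET = b"the private joke we share"

def make_challenge() -> bytes:
    # A fresh random challenge prevents replaying an old response.
    return os.urandom(16)

def respond(challenge: bytes) -> str:
    # The caller proves knowledge of the secret without ever saying it aloud.
    return hmac.new(SECRET, challenge, hashlib.sha256).hexdigest()

def verify(challenge: bytes, response: str) -> bool:
    return hmac.compare_digest(respond(challenge), response)
```

The crucial property is the same one Ros-Lehtinen relied on: the response only makes sense to someone who already knows the secret.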

December 1, 2008

And the solution is … SSL!

Filed under: Architecture,SOA — Bavo De Ridder @ 5:53 pm

Today I attended a talk from Microsoft about their new Azure cloud computing platform. They had hired David Chappell to present the first sessions, which introduced the whole concept and the specific offering Microsoft is making in this area.

It was all interesting; David Chappell is a gifted speaker. At one point, however, I got disappointed. David was explaining how REST is a very good choice for communicating with cloud services. Amazon, Google, Microsoft … they all have cloud data services that can be accessed using a REST API. Someone in the audience asked how, if they don't use SOAP with WS-*, they can secure this. David's answer came quickly: "oh … use SSL … it's only one endpoint talking to another endpoint, SSL can secure that".

The days when there was always a single network connection between the consumer (client) and the producer (server) are over. On both sides, the message passes through various firewalls, gateways, messaging infrastructure … before being delivered to the real message endpoints. You can't use SSL to protect it all; SSL will only protect a single network connection.

With the current solutions and architectures, people need to understand that message-level (or application-level) security can never be achieved by depending on transport-level security alone. There are just too many hops between the two endpoints. You need appropriate security controls at the message level as well. If people insist on using REST for scenarios that go beyond low-assurance needs, they must think about message-level security and trust controls that are independent of the transport layer. If we continue to neglect message-level security in REST while at the same time promoting the use of REST in cloud data services, we are destined for a security nightmare in the near future.
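
To make the difference concrete, here is a minimal sketch of message-level integrity (not any vendor's API; the key and envelope format are made up for illustration). The signature travels with the message through every firewall, gateway and queue, whereas SSL only protects one hop at a time:

```python
import hashlib
import hmac
import json
import time

# Hypothetical key shared by the two *message* endpoints; the
# intermediaries in between never see it.
SHARED_KEY = b"key-provisioned-out-of-band"

def sign_message(payload: dict) -> dict:
    """Wrap a payload with a timestamp and an HMAC over the message itself,
    so integrity survives every hop, not just one network connection."""
    body = json.dumps(payload, sort_keys=True)
    timestamp = str(int(time.time()))
    signature = hmac.new(SHARED_KEY, (timestamp + body).encode(),
                         hashlib.sha256).hexdigest()
    return {"timestamp": timestamp, "body": body, "signature": signature}

def verify_message(envelope: dict, max_age: int = 300) -> dict:
    """Verify at the real endpoint; reject tampered or replayed messages."""
    expected = hmac.new(SHARED_KEY,
                        (envelope["timestamp"] + envelope["body"]).encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, envelope["signature"]):
        raise ValueError("message integrity check failed")
    if time.time() - int(envelope["timestamp"]) > max_age:
        raise ValueError("message too old, possible replay")
    return json.loads(envelope["body"])
```

A real deployment would use proper key management and standards such as XML Signature or a signed-request scheme for REST; the point is only that the protection is attached to the message, not to the pipe.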

The topic of REST and security is also definitely not new.

While we are on the topic of "build security in, from the start" … may I kindly ask both Microsoft and Adobe to support the WS-* security standards in their RIA technologies (Silverlight and Flex)? If Microsoft really is serious about security, as they so often claim these days, then why does Silverlight 2.0 not have support for web services security beyond just SSL? It looks like a missed opportunity for which we will pay dearly in a couple of years.

On the bright side … a lot of the services Microsoft will offer on their Azure platform will have full, first-class support for claims-based access control. At least a standards-based authentication is possible. They do seem to think it also solves the authorization problem … that is wrong. Perhaps more on that later.
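
The distinction matters: a verified token full of claims tells you who the caller is and what attributes they carry; it says nothing about what they may do. A toy sketch (claim names and policy entirely made up):

```python
# Claims established at authentication time, e.g. extracted from a
# verified security token (names are hypothetical).
claims = {"subject": "alice", "role": "sales",
          "issuer": "https://sts.example.com"}

# Authorization is a separate policy decision that *consumes* claims.
POLICY = {
    ("sales", "read"): True,
    ("sales", "delete"): False,
}

def is_authorized(claims: dict, action: str) -> bool:
    # Authentication answered "who?"; this answers "may they?".
    return POLICY.get((claims.get("role"), action), False)
```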

September 24, 2008

Fly secure, don’t drink

Filed under: Security — Bavo De Ridder @ 2:15 pm

We all know how these days you are not allowed to bring any significant amount of liquid onto an airplane. Any liquid you do bring with you is swiftly taken away. Bruce Schneier has an excellent blog entry on the usefulness of this rule.

In Belgium we have the television series "Airport Security", about the day-to-day security operations at our national airport ("Brussels Airport"). It is actually a spin-off of similar US and UK shows. In one of the episodes they showed how they confiscate liquids. After a couple of days, all the bottles amounted to a fairly large pile, all nicely tucked away in plastic storage boxes. Their contents are, however, not safely disposed of (after all, they could contain explosives); instead everything is given away to a charity organization, which then distributes it to people in need.

Although I support the fact that they want to help charity organizations, it seems a bit illogical to me. One minute they treat these bottles as potentially dangerous, confiscating them without exception; the next minute their risk level seems to drop to zero and they are handed out to charity.

As Bruce states in the above mentioned article:

If something is dangerous, treat it as dangerous and treat anyone who tries to bring it on as potentially dangerous. If it’s not dangerous, then stop trying to keep it off airplanes.

So either we stop confiscating those liquids, or we start handling them as if they really carry a risk: treat anyone who tries to bring them on as potentially dangerous and safely dispose of the liquids. Our current procedures are just stupid, annoying and incomplete, and they add no value to protecting those who travel by air.

September 12, 2008

DropBox + PasswordSafe = Good ??

Filed under: Access Control — Bavo De Ridder @ 10:25 am

When I read Joel Spolsky's post about DropBox combined with PasswordSafe, I kind of fell off my chair. Apparently he was looking for a way to store his passwords safely and be able to access them on any computer he uses. This is what he proposes:

  1. Install DropBox on all your computers. DropBox is a simple tool that synchronizes a local folder to an Internet site. It synchronizes the contents of the folder so you'll have the latest version of your data available on all your computers. Note that DropBox is secured by a username and password sent over the Internet (using SSL of course, at least I hope so).
  2. Install PasswordSafe on all your computers. This is an application that creates a database to store and generate passwords. It uses a password to encrypt the database. The usual scheme is deployed: the password is fed into a key derivation function, and the resulting key is used to encrypt the database (see the sketch after this list). PasswordSafe can generate long, random passwords for you and helps you enter them into login forms.
  3. Store the PasswordSafe database in the DropBox folder.
  4. Password Nirvana!
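
PasswordSafe's actual file format differs in its details, but the password-to-key scheme described in step 2 looks roughly like this sketch (using PBKDF2 from the standard library and the third-party cryptography package; all parameters are illustrative):

```python
import base64
import hashlib
import os

from cryptography.fernet import Fernet  # pip install cryptography

def derive_key(password: str, salt: bytes) -> bytes:
    # Key derivation function: stretch the password into an encryption key.
    raw = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return base64.urlsafe_b64encode(raw)

def encrypt_database(password: str, plaintext: bytes) -> bytes:
    salt = os.urandom(16)
    token = Fernet(derive_key(password, salt)).encrypt(plaintext)
    return salt + token  # the salt is stored alongside the ciphertext

def decrypt_database(password: str, blob: bytes) -> bytes:
    salt, token = blob[:16], blob[16:]
    return Fernet(derive_key(password, salt)).decrypt(token)
```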

Joel even suggests going all the way: change your bank account password to something really hard (like 16 random characters) and store it in PasswordSafe.

Joel seems to think this is all really safe, since he is using long, hard passwords on websites (those 16 random PasswordSafe passwords) and a key derivation function is used to encrypt his PasswordSafe database. Well Joel … I don't think so. This is a clear case of "security dependencies", or the "weakest link" …

Let's see what I need to do to get at Joel's really long and hard-to-guess 16-character password for his bank account.

  1. I need to hack into his PasswordSafe database. In order to do that, I first need access to it …
  2. I need to hack into his DropBox account. Doing that requires the usual hacking of a username and password on an Internet site. With the DNS flaws and the various phishing techniques out there, that is not even that hard these days. Not to mention that it is worth the effort; after all, it will give me access to his bank account!
  3. Now that I have his PasswordSafe database, I need to decrypt it. I don't care one moment about the strength of the encryption algorithm, nor do I care about the quality of the key derivation function. The only thing I need to know is his password. Since I have the database offline and there is no mechanism whatsoever to discourage a brute-force guessing attack, this is purely a matter of time (see the sketch below). The attack is even undetectable, since it happens on my local infrastructure.
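
To illustrate the last point: with an offline copy there is no server to lock me out or slow me down, so the guessing loop below (reusing decrypt_database from the earlier sketch) runs as fast as my hardware allows. The only cost per guess is the key derivation work, which is exactly why a weak master password falls quickly.

```python
def offline_guess(blob: bytes, candidate_passwords):
    """Hypothetical offline brute-force loop: nothing rate-limits or
    detects the attacker, so time is the only defence left."""
    for password in candidate_passwords:  # wordlists, mangling rules, ...
        try:
            # decrypt_database is from the sketch above; a wrong
            # password simply fails to decrypt.
            return password, decrypt_database(password, blob)
        except Exception:
            continue
    return None
```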

Whatever cool encryption and synchronisation mechanisms this setup uses, eventually the entire security depends on just a username and password. Since he wanted to protect a password login in the first place (his bank account), I wonder what he actually achieved in terms of increased security.

My first thought would be to say that he has replaced password-based security with … password-based security. The only thing that has changed are some extra, but minor, hurdles to hack it all. But I would even go further: he ends up less secure, since cracking his PasswordSafe opens up all his accounts for me, not just his bank account, and the overall attack is less detectable than if I hacked his bank account directly.

Addendum … this article discusses the same topic but using different examples:

To use an analogy (certain to spike my readership, even if only till the US political process spits out some other triviality to focus on) you can put lipstick on a pig, but all you’ll end up with is a cosmetically enhanced porker.

Similarly, you can plaster on the lipstick of strong authentication like Tammy Faye but, if you are smearing it onto a pig of an identity proofing process, you'll still be eating the bacon of low assurance …

July 8, 2008

HR, your source of identities?

Filed under: Identity — Bavo De Ridder @ 10:26 am

For a few years I had the pleasure of working for Novell. I did several consulting projects with Identity Manager and even have some experience with its predecessor, DirXML. After the Novell era, I worked for an independent service provider and got to know Sun Identity Manager and IBM Tivoli Identity Manager. This is just to say that I have at least some experience in the world of identity management and directory synchronisation.

Matt Flynn is chiming in on the virtual directory versus meta directory "blog wars" that went on earlier this year. You can catch up here, here, ah, also here and then here as well.

In that post, Matt Flynn starts with a simple scenario: there is an HR database, an Active Directory and a custom-built SQL identity store. So far so good; that looks standard and simple. Then he continues by requiring that the HR database be the primary source for account creation and status.

This is where I have to disagree, strongly disagree. For years, IDM product vendors have been telling us that the HR database should be the primary source of identity information. This is just not true. The HR platform cannot fulfil the role of primary source. The platform has been built and is driven by the need to manage the employment status of people and make sure they are paid properly and on time. The difference between what the HR platform actually is and what IDM product vendors want it to be becomes more visible if you look at the following typical issues:

  • New employees are not entered into the HR system fast enough. The IDM system can't act on events that don't happen in time.
  • Some of the attributes kept in the HR system are of lesser importance to HR and are therefore typically of lower (data) quality. The IDM system, however, depends on correct and up-to-date values for these attributes.
  • When employees move internally (to a different department or business division), the HR system often lags behind in updating the employee records. It also rarely models the transition periods typically involved in such a move.

To me, these are all signs that HR systems, at least as they are managed today, should not be used as a primary source for account creation and status. In fact, the HR system should probably be "just a slave" of the IDM system. Leave the HR system for what it is: a system for managing the legal and financial aspects of employment.

If you use the HR system as your primary source, you will soon find yourself implementing numerous ugly hacks and workarounds to compensate for low-quality data and events that are triggered either too late or without enough detail. Demanding that the HR department get its act together and improve is not a good solution. Identity management is not their job; they manage the legal and financial relationships. That's just one part of the identity. It's the IDM product that should manage the identity and inform the HR system of changes that are relevant to the legal and financial aspects of the relationship.
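
In practice, "using HR as the primary source" tends to mean wrapping it in defensive plumbing. A hypothetical sketch of the kind of sanity gate an IDM connector ends up placing in front of HR events (field names and thresholds are made up):

```python
from datetime import datetime, timedelta

# Attributes the IDM system needs, whatever HR considers important.
REQUIRED_ATTRIBUTES = ("employee_id", "department", "manager", "start_date")

def accept_hr_event(event: dict, max_lag_days: int = 5):
    """Reject HR events that are incomplete or arrive too late to be
    useful for account provisioning."""
    missing = [a for a in REQUIRED_ATTRIBUTES if not event.get(a)]
    if missing:
        return False, f"rejected: missing attributes {missing}"
    start = datetime.fromisoformat(event["start_date"])
    if datetime.now() - start > timedelta(days=max_lag_days):
        # The employee already started; the accounts were needed on day one.
        return False, "rejected: event arrived too late for provisioning"
    return True, "accepted"
```

Every check in such a gate is really a symptom of the mismatch described above.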

None of the current IDM product vendors, however, has a product that can serve this role. As far as I know, most of these products are expensive data synchronisation tools with some workflow and UI layers on top. As the years pass by, I wonder whether any of these vendors is ever going to radically change and improve how (enterprise) identity management is dealt with. Since the first of these IDM products, over 10 years ago, not much has changed. It's just more of the same.

July 3, 2008

YouTube vs. Viacom … what about privacy?

Filed under: Privacy — Bavo De Ridder @ 4:51 pm

Most of you have probably heard about the case in which a judge ordered Google to turn over every record of every video watched by YouTube users, including the users' names and IP addresses. This was in response to a complaint filed by Viacom against Google for allowing clips of its copyrighted videos to appear on YouTube. Read about it here. This is the actual ruling from the judge.

I am not going to comment on the copyright issues or the actual complaint filed. I am, however, worried about the consequences for online privacy. A lot of users will see their personal information handed over to Viacom even though they probably never watched a single copyrighted clip, or at least were not aware of infringing anyone's copyright. Somehow this reminds me of the Toysmart.com case: a company selling toys files for bankruptcy and tries to sell its customer database to the highest bidder. It was eventually stopped by the FTC.

People can hand out personal information to sites and even carefully review the privacy terms before doing so. It means nothing if rulings like this can mean your information is handed over to a third party. It would be a different case if that information helped law enforcement agencies detect crimes and prosecute criminals; I trust law enforcement agencies more than Viacom to process that data properly. Does Viacom give any guarantees on safeguarding this data? Will the processing be transparent and with full disclosure to the users involved?

June 23, 2008

From here to there … AS-IS and TO-BE

Filed under: Architecture,Process — Bavo De Ridder @ 2:36 pm

Everyone in ICT has come across projects that will replace a current situation (the AS-IS) with a desired future situation (the TO-BE). At first sight it looks great: you analyze the current situation, including shortcomings and issues, and document it in an AS-IS document. Then, based on the results of this AS-IS and various gathered requirements, you design and document the new state: the TO-BE. Looks good, right? Not to me …

The drivers for these projects are often the same:

  1. an increasing perception that the current system is unable to fulfill the needs it was created for in the first place.
  2. a general feeling that extending the current system, for instance to solve some of the issues, is becoming too expensive or too complex.

Before I start designing a future TO-BE, I would like to know why the system as it is today, the AS-IS, is no longer fulfilling its goals. Is it because technology has changed significantly during its lifetime? Is it because the people who designed, developed and maintained the system haven't done a decent enough job? Is it because the system, once a perfect fit for the problem, started to lose alignment with its environment, slowly being rendered obsolete and in need of replacement?

Without proper answers to these questions, and without a proper response in the TO-BE, that TO-BE is surely destined to become your next AS-IS. In a few years we will no doubt witness a presentation explaining how the AS-IS (the TO-BE we are building today) is not good enough anymore and needs to be replaced with something new and more modern.

The world is constantly changing, and so are your company and the environment it lives in. Any ICT system that operates as part of your company needs to change to keep in line with that changing environment. If you only focus on building a static architecture that is unable to adapt to change, you are doomed to recreate the system, in the form of a desired TO-BE, every couple of years.

Only during a small part of its existence is such a system properly aligned with actual requirements. Most of the system's lifespan is spent either complaining about the lack of alignment or promising improvement with the upcoming TO-BE.

I therefore don't really believe in this AS-IS/TO-BE methodology. When you realize you are lagging behind while the world around you is changing, you won't solve the problem by desperately catching up to the present. By the time you have finally caught up (the TO-BE is delivered), you are already lagging behind again. Even if you went to great lengths to make that TO-BE as flexible as possible, you can never predict the future. If you can, give me a call.

What you want is a process that:

  1. periodically measures how well the system is aligned to the environment,
  2. identifies those elements of the system that are in danger of losing alignment,
  3. proposes gradual changes to the system to improve alignment.

Note how nowhere in this process do we propose to redesign and reimplement the system. At a smaller scale this technique is well known in software development: it's called refactoring. This is exactly what you also want to do at the larger scale of your architecture: refactor mercilessly. Refactoring should not be limited to the development phase but should be an integral part of the entire life cycle of a system.

Given a proper refactoring process and the obvious current, AS-IS, state of a system, I can gradually improve and align that system with an ever-changing environment until the very need for the system itself disappears. I should avoid a big-bang approach that proposes and develops a brand-new TO-BE system.

Building for change is not a new slogan, yet it is neither well understood nor well implemented. Every day, projects are born that are meant to create a new TO-BE and, sadly enough, at the same time the AS-IS of tomorrow.

May 27, 2008

Flex … not that flexible it seems …

Filed under: Architecture — Bavo De Ridder @ 9:36 am

In the last few weeks I have come into contact with Adobe Flex, creating a separate Flex front end that talks to a back end using web services. The advantage would be rapid development of a front end that can use the tons of fancy UI features offered by Flex.

After a few proofs of concept I quickly ran into several issues, the most important being that Flex does not support WS-Security. Read that again: Flex does not support WS-Security. Note that Flex is positioned as something that prefers to use web services to talk to the back end. Also check this bug report. There are some tutorials that explain how you can cheat and add WS-Security headers yourself; this is obviously limited to simple headers and does not include signing or encryption.
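
For completeness, the "cheat" those tutorials describe amounts to splicing a plain-text WS-Security UsernameToken header into the SOAP envelope by hand. A sketch of the idea (in Python rather than ActionScript, and for illustration only; it covers none of the signing or encryption parts of WS-Security):

```python
from xml.sax.saxutils import escape

# OASIS WS-Security 1.0 secext namespace.
WSSE_NS = ("http://docs.oasis-open.org/wss/2004/01/"
           "oasis-200401-wss-wssecurity-secext-1.0.xsd")

def add_username_token(username: str, password: str, body_xml: str) -> str:
    """Build a SOAP envelope with a hand-rolled UsernameToken header."""
    return f"""<soapenv:Envelope
    xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Header>
    <wsse:Security xmlns:wsse="{WSSE_NS}">
      <wsse:UsernameToken>
        <wsse:Username>{escape(username)}</wsse:Username>
        <wsse:Password>{escape(password)}</wsse:Password>
      </wsse:UsernameToken>
    </wsse:Security>
  </soapenv:Header>
  <soapenv:Body>{body_xml}</soapenv:Body>
</soapenv:Envelope>"""
```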

I wonder how Adobe can keep positioning Flex as a great, enterprise-capable way of creating portable rich front ends when they don't support WS-Security. In fact, they don't support any of the WS-* standards.

Not supporting WS-Security is one thing; it might be on the roadmap but not yet implemented. There is, however, something else in that bug report that caught my attention …

dashes (-) are not allowed while naming things like classes, variables, attribute, etc in AS3. The elements named with dashes, when mapped to AS3 objects will not compile.

Gasp. Not allowing dashes in names is something other languages do too, but having a standard mapping (XML/SOAP to ActionScript 3) that does not take this into account is more severe. They obviously didn't test the mapping extensively, or they did and ignored the results.
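
Mapping XML names to language identifiers is a solved problem; any robust binding layer sanitizes the names and keeps a reverse mapping for serialization. A hypothetical sketch of what that takes (a few lines, which makes the omission all the more telling):

```python
import keyword
import re

def xml_name_to_identifier(name: str) -> str:
    """Turn an XML name such as 'first-name' into a legal identifier
    ('first_name'); a real binding layer would also store the reverse
    mapping so it can serialize back to the original element name."""
    identifier = re.sub(r"[^0-9a-zA-Z_]", "_", name) or "_"
    if identifier[0].isdigit() or keyword.iskeyword(identifier):
        identifier = "_" + identifier
    return identifier
```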

For me, as a developer, this is also an indication that their underlying code that maps XML to ActionScript objects started as a quick-and-dirty implementation to support simple demonstrations and somehow grew into code that went into the production version. The fact that they don't support any of the WS-* standards only supports this theory.
