An idea for improving credit card security

It seems like there’s always some company or other letting its credit card database fall into the hands of a malicious party, whether through an inside job or over the Internet. It’s obvious that people don’t know what they’re doing when it comes to taking the necessary precautions to protect their data, and that they’re really good at storing it in plain text, which makes it simple to index.

What if it were possible for the model to shift in a manner that both introduced multi-factor protection on data and let the clueless keep doing business without needing to learn anything new or change their processes at all (particularly that plain-text thing, since it’s so efficient)?

What I’d suggest is really quite simple: don’t store credit card numbers anywhere. At all.

Now, of course, the first response I’d expect to hear, were I to stop there, is “but we can’t take a hash and reconstruct a value understood by the processors, and encryption still involves storing the number!”, and whoever were to say that would be entirely right. The next thing I’d expect to hear is “but if we do away with credit card numbers, users won’t know how to pay for things!”, and this, too, is right.

Credit cards wouldn’t have to go away or change form at all. Numbers would still be transmitted over the Internet, covered by whatever transport-layer security is deemed acceptable by the involved parties, but something magical would happen when they’re received: they’d be sent to the credit card company, just as they are now. Shocking revolutionary twist, isn’t it?

No, it’s not quite that simple, but it’s pretty close. The credit card number would be sent along with a credit-card-company-issued merchant ID, and a unique identifier would be sent back, forever marrying the customer’s payment method with the merchant (no cryptographic integrity is needed; it can be a predictable, sequentially-issued number, so long as it draws from a universal, multi-merchant pool, letting entropy prevent systematic lookups). Upon receipt of this identifier, the credit card number would be discarded, and the identifier could be stored as an attribute of the user’s account on the merchant’s end. When the user wants to make another purchase, the identifier is sent to the credit card company again, and the onus is solely on that party (which should have nigh-impenetrable security) to figure out which financial institution’s patron to charge for the transaction.
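
To make that flow concrete, here’s a toy sketch in Python; the class, method names, and values are all invented for illustration and bear no relation to any real processor’s API.

    # Toy model of the exchange described above; every name here is invented.
    import itertools

    class CardCompany(object):
        """Stand-in for the credit card company's side of the exchange."""
        def __init__(self):
            self._next_id = itertools.count(1)  # predictable, sequentially-issued
            self._bindings = {}  # identifier -> (card number, merchant ID)

        def issue_identifier(self, card_number, merchant_id):
            identifier = next(self._next_id)
            self._bindings[identifier] = (card_number, merchant_id)
            return identifier

        def charge(self, identifier, merchant_id, amount):
            card_number, bound_merchant = self._bindings[identifier]
            if bound_merchant != merchant_id:
                # Identifier presented by anyone other than its merchant: void.
                raise ValueError("identifier not bound to this merchant")
            # ...forward the charge to the customer's financial institution here...

    # Merchant side: the card number exists only long enough to be exchanged.
    processor = CardCompany()
    identifier = processor.issue_identifier("4111111111111111", "some-merchant-id")
    # The number is now discarded; only the identifier is stored with the account.
    processor.charge(identifier, "some-merchant-id", 19.99)

The binding check in charge() is the whole point: an identifier presented by anyone other than its registered merchant buys nothing.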

“Okay, so… uh… what if the identifier gets compromised?” is, I imagine, the looming question. The answer is elegant in its absolutism: nothing.

The identifier uniquely identifies a pair of entities: the customer and the merchant. If the credit card company receives an identifier from a party other than the merchant to which it is registered, the transaction is void (and easy to stop, through an immediate severance of the binding), and if the merchant attempts to act fraudulently, the customer has a very clear path of recourse. Consumer-protection failsafes notwithstanding, the only circumstance under which an attack would be profitable to a malicious party is one in which the merchant provides goods or services the attacker cares about, and for that vector to be exposed, the attacker would need inside knowledge of how data actually flows between the components of the merchant’s architecture. That would require compromising many disparate systems, providing multi-factor protection through the omnipresent mashup of security-oriented design and brain-breaking “MAKE IT WORK NOW” kludges.

To summarise this idea, a single sufficiently large numeric identifier (probably 1024-bit to start, which allows potential for easy expansion and eternal backwards-compatibility) would identify a customer (credit card) and a merchant (registered business) in a manner intelligible only to the credit card provider, the party with the greatest investment in security. This identifier could be stored unencrypted in databases and be protected by processes and checks, rather than arms-race-oriented encryption and social trust.

The fact that this identifier is bound to exactly one relationship means that its external value is non-existent and that its potential for harm, should it slip, is limited to that one relationship. It can therefore be treated as an attribute of another pre-existing parallel relationship (the user’s account with the merchant, though one-offs are fine, too, if the merchant doesn’t need to request an identifier), which allows whatever security resources a merchant has to be focused on protecting user data in general, for the company’s benefit, rather than the patron’s. (Authentication like Amazon’s or that of reputable banks would be a beneficial, but unnecessary, complement.)

Lastly, because the identifier is known to only two parties (the merchant and the credit card company), it can be quickly abandoned from the merchant’s end if it ever gets compromised, with no more impact than requiring a new identifier to be issued in a subsequent transaction.

Just a thought, though. Feel free to cite this to help overturn any patents that may be issued around a similar idea — security shouldn’t be encumbered.

Update: It seems Moneris has been doing something similar for a while. Oh, well. At least the idea’s been implemented in some capacity.

Symmetric encryption in pure Python (Blowfish)

For a project at work, it became obvious that I would need to implement some form of partial encryption between hosts in a self-configuring network. Nothing super-extreme, of course, since all traffic will exist within a closed environment; I just need enough protection to prevent casual observers from learning enough about the protocol to inject malicious packets, and to provide some simple handshaking between the components of the system.

Blowfish came to mind as a good scheme for handling this (each host and operator can be considered sufficiently secure, so sharing an application-specific pass-phrase via config-file is an acceptable solution), and a small amount of Googling turned up http://ivoras.sharanet.org/projects/blowfish.html (based on http://felipetonello.com/scripts/python/blowfish.txt, though I doubt that’s the original site), which I ended up using as the basis for my implementation. Actually, ‘basis’ isn’t the right word, since I didn’t change any logic at all. Rather, I just replaced the more antiquated data-types and access methods with more modern equivalents, restructured the layout, and sought to bring things more in line with PEP 8. The result is a slightly faster, leaner implementation that’s a bit more readable.
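
For what it’s worth, here’s a rough usage sketch (not the implementation itself); the module name and pass-phrase are placeholders, and the interface, a Blowfish class built from a key string that encrypts and decrypts 8-byte blocks, is assumed to match the original.

    # Usage sketch only; 'blowfish' is a placeholder module name, and the
    # interface is assumed to match Ivan Voras's version.
    from blowfish import Blowfish

    passphrase = "application-specific pass-phrase"  # shared via config-file
    cipher = Blowfish(passphrase)

    block = "8 bytes!"  # Blowfish operates on 64-bit (8-byte) blocks
    sealed = cipher.encrypt(block)
    assert cipher.decrypt(sealed) == block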

My code, with an identical interface to Ivan Voras’s version, dual-licensed under the GPLv1 and Artistic Licenses, like the original, is reproduced below:
(more…)

A month of silence

Between a general lack of free time and a number of short-term tasks, I haven’t had much of an opportunity to do anything with the projects hosted here (or even to prepare for PyWeek). Work will resume next month, once I’ve got a real schedule around which to plan my time.

What time I’ve had has been contributed to the development of Quickshot for the Ubuntu Manual project, so it could make the deadline imposed by the launch of 10.04. Progress is steady, so look forward to the first release of the manual in its many languages, if you’ve been on the fence about switching.

Pretty At3 CDs

My At3 discs finally arrived (though FedEx’s duties overhead for Canada is pretty brutal). I’ll start adding content to the Hymmnoserver based on what I glean from the inserts and aquagon‘s updates to the Conlang article as soon as I get a break from work.

Tracklist translations: OST (disc 1, disc 2), Hymmnos concert side (blue, red)

Update: Unfortunately, that “break from work” probably won’t be happening this weekend. More unpaid overtime for me.

hamsterx.homelinux.org deprecated

uguu.ca has succeeded hamsterx.homelinux.org as home for all projects it once hosted. All users of the old site are strongly encouraged to switch to the new resources, categorized under the links at the top of this page, at their earliest convenience.

Of course, services like the Hymmnoserver, read-only Subversion repository access, the project wiki, static resources, and user directories will remain operational for the foreseeable future, to make the migration process as smooth as possible. They will all be fully accessible until the fate of the network connection that currently links to the old server is decided later this year, at which point 301s and 410s will be deployed as necessary to keep search engines happy.

One exception to the keep-everything-alive-as-long-as-possible plan is that the wiki will no longer be maintained and will be entirely decommissioned once it is clear that Google’s spider has found and prioritized this site. This is to prevent duplication of effort and content drift, the avoidance of which benefits everyone. Once Google’s spider has picked up on the 301s served by the old site, that site will be given a static front page explaining the migration plans and progress, to help keep everyone current with what’s happening.

Moving to Google Code

Part of the process of moving away from my old, potentially-hard-to-maintain-in-the-future project-publishing model involved moving my local SVN repositories to Google Code, Google’s open-source-friendly project-hosting service.

Setting up the project pages and moving my existing code over there was quite easy. However, I wanted to keep local read-only repositories to avoid breaking links while I get uguu.ca set up for future development work and because I like having redundant backups of things I consider important.

I figured I should kick this blog off with a post that others might consider helpful, so I’m documenting the actions I took below. Note that this will not preserve SVN UUIDs, so you’ll need to diff and patch any dirty working copies you or other committers may have, or otherwise adapt the steps to suit your needs. I’m also assuming you’ve set up sudo; if not, do everything after the commit to Google as a user, like root, whose permissions will prevent other users from touching the mirror repository (only svnsync should ever be allowed to write to it, and you’ll be crontabbing that).

  1. Create and reset your Google Code repository
    1. Create a Google Code project
    2. Find and follow the “reset” link at the bottom of any page under the “source” tab
  2. Migrate your existing code to Google Code
    1. svnsync init https://YOUR_PROJECT_NAME.googlecode.com/svn EXISTING_REPOSITORY_URL
      • The source repository must be given as a URL, like file:///var/svn/my_project, rather than a bare path
      • You’ll be prompted to provide credentials to commit to Google Code; these can be found in your profile
      • If your old repository requires credentials to be read, you may also be prompted for those
    2. svnsync sync --non-interactive https://YOUR_PROJECT_NAME.googlecode.com/svn
  3. Wait while each commit is replayed; if you have mature projects, this can take quite a while. Do NOT interrupt this process, or you may be left with a locked repository on Google’s end, which you’ll need to handle manually (either by clearing the lock or resetting the repository again)
  4. At this point, everything you’ve ever done has been duplicated on Google’s side, so it’s time to create the local mirror to make sure that, in the unlikely event that Google suddenly disappears, you’ll still have access to your work
    • Before proceeding, it would be a good idea to grab a fresh checkout of your code from Google’s servers and make sure it matches your working copy, using diff and ignoring the .svn subpaths.
    • When satisfied that nothing was lost in the move, you may want to delete your old repository to avoid confusion; with this method, you can’t just have it sync against Google’s repositories as-is (UUID mismatch). Additionally, after the repository is remade, the same UUID-mismatch problem will prevent committers from updating a non-master repository, so this is a good thing
  5. Create a new repository to house code mirrored from Google Code
    1. sudo svnadmin create NEW_REPOSITORY_PATH
      • NEW_REPOSITORY_PATH should be specified in absolute terms, just to avoid confusion later
    2. sudo bash -c 'echo "#!/bin/bash" > NEW_REPOSITORY_PATH/hooks/pre-revprop-change'
    3. sudo chmod u+x NEW_REPOSITORY_PATH/hooks/pre-revprop-change
      • This avoids an error normally raised to prevent accidental damage to repositories
      • The command in step 2 may look a little funny — this is because the redirect needs to be caught within the scope of sudo, so the whole thing needs to be passed as a single argument to a new bash instance
  6. Synchronize your repository with the one on Google’s servers
    1. sudo svnsync init file://NEW_REPOSITORY_PATH http://YOUR_PROJECT_NAME.googlecode.com/svn
      • Note that you’ll have something that looks like ‘file:///var/svn/my_project’ here, with three forward slashes
    2. sudo svnsync sync --non-interactive file://NEW_REPOSITORY_PATH
  7. Set up a cron job to keep things synchronized every night
    1. sudo crontab -e
      • 0 3 * * * svnsync sync --non-interactive file://NEW_REPOSITORY_PATH
        • Adding this line will make your system synchronize the repository with Google’s servers every day at 3:00 AM; the first two values are minute and hour, so play with them as you see fit (if you end up mirroring several projects, there’s a small wrapper sketched after this list)
  8. Everything’s done! Your code’s now on Google’s highly accessible servers, surrounded by Google’s excellent project-management resources, and you have a local copy for paranoia’s sake
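
As an aside, if you end up mirroring several projects this way, the single crontab line can point at a small wrapper instead; the sketch below uses placeholder paths and just repeats the same svnsync command from step 7 for each mirror.

    #!/usr/bin/env python
    # Sketch of a multi-mirror nightly sync; the paths are placeholders.
    import subprocess
    import sys

    MIRRORS = (
        "/var/svn/my_project",
        "/var/svn/another_project",
    )

    failures = 0
    for path in MIRRORS:
        # Same command as the crontab line in step 7, once per mirror;
        # "file://" + an absolute path yields the three-slash form noted above.
        if subprocess.call(["svnsync", "sync", "--non-interactive", "file://" + path]):
            sys.stderr.write("sync failed for %s\n" % path)
            failures += 1

    sys.exit(1 if failures else 0)

Point the crontab entry at this script instead of at svnsync directly and the schedule stays exactly the same.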