News Flash - Exploiting Software Defects for Profit: Still Illegal

A story ran in the Pittsburgh Post-Gazette on Tuesday that triggered some interesting conversation.


"Moments before he was to stand trial for bilking The Meadows Racetrack and Casino out of nearly a half-million dollars in fraudulent jackpots, a Swissvale man was arrested Monday by federal authorities, who say he actually may have stolen as much as $1.4 million from casinos in the U.S. and abroad."


Wow.  The story gets better.  Apparently, through a combination of social engineering of casino floor workers and "a software glitch" (affectionately referred to as a bug), this group of people was able to steal some very real money.  The short of it is that they were caught because they got greedy, as they always do.


"When the correct sequence of buttons was pushed, the machine displayed false double jackpots. No casino officials noticed because the bogus jackpots weren't being recorded in the machine's internal system."


So, am I the only one who reads this "glitch" as a potentially planted bug in the system?  I have to admit, if there were a bug that paid out random winners, that would be a glitch - but a "glitch" that's triggered by a specific sequence of buttons and isn't logged is a planted bug, period.  Someone needs to open an investigation into the company that makes that machine, and into whether it's profiting from a poor QA process!  Speaking of QA process integration (see my last post and others), wow... what kind of failure was this?
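To make the distinction concrete, here is a minimal sketch of the code shape the article describes. Everything here is invented for illustration - the button names, the class, the amounts - but it shows why a sequence-triggered, unlogged payout reads as a deliberate backdoor rather than an accident:

```python
# Hypothetical sketch only -- not real slot machine code; all names invented.
# A "glitch" triggered by a specific button sequence, whose payouts never
# reach the internal log, would have to look something like this:

TRIGGER_SEQUENCE = ["cashout", "bet", "bet", "spin", "cashout"]  # invented

class SlotMachine:
    def __init__(self):
        self.recent = []
        self.audit_log = []  # the machine's internal accounting record

    def legitimate_jackpot(self, amount):
        self.audit_log.append(("jackpot", amount))  # every real win is logged
        return f"JACKPOT ${amount}"

    def press(self, button):
        # keep a sliding window of the last few button presses
        self.recent = (self.recent + [button])[-len(TRIGGER_SEQUENCE):]
        if self.recent == TRIGGER_SEQUENCE:
            # Backdoor path: display a double jackpot but write nothing to
            # the audit log -- exactly why floor staff saw no anomaly.
            return "JACKPOT $1000  JACKPOT $1000"
        return None
```

Note that the legitimate path logs before paying, while the trigger path skips the log entirely - a hardcoded comparison against a magic input sequence does not happen by accident.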


So this smells a little funny to me.  Are casino machines really that software-glitchy?  Is there really such poor quality control (let's face it, this isn't just a security issue!) that these bugs make it out the door and onto the casino floor?  I think I have more questions than answers here... for example, how did this group find out about these "software glitches"?


I don't know about you, readers, but I think there is more to this story than meets the eye (or the press), and the "villains" are being brought up on federal charges to keep them from talking.  I suspect this story won't get properly investigated and will be quietly buried.



(anon) | ‎01-07-2011 07:36 PM

A planted bug of this nature is the kind of thing that black box testing techniques would never catch, but static code analysis could identify very quickly.


I would be very curious to see how this played out in terms of vendor management, sub-contracted software development, and audit controls. It's entirely possible that the manufacturer of the slot machines relied on one or more contracted firms to develop some or all of the embedded software running inside them. Portions of that software, in turn, could've been sub-contracted out to other firms, or purchased as standalone modules. Likewise, testing could've been contracted out to third parties without access to source code, or even detailed specs. In such a scenario, the company selling the slot machines might not have any real handle on what level of testing actually occurred, beyond the false sense of security (read: CYA) provided by a signed contract.


Even if all of the software development was done in-house, it's also possible that the group responsible for acceptance testing of the machines was organizationally segregated from the group responsible for writing the software. Whatever caused it though -- as you've said -- just boils down to poor QC.


I wonder how much $$$ was on the line there, and what might be going on behind closed doors, in order for the Feds to swoop in like that.

Rafal Los (Wh1t3Rabbit) | ‎01-08-2011 04:48 PM


While I think you raise an interesting point, I think you're wrong about how this could have been caught.  Dynamic analysis as we normally think of it in the web app world would probably not find this condition if it was well hidden - however, run-time analysis in the form of a program trace of the running code (a variation of dynamic analysis) could have uncovered a code branch that stayed dormant (unexercised) across the normal use cases.  In that case, the testing system should have triggered a deeper inspection of the code in that particular module, thus catching the bug (planted trojan).


As far as source code analysis goes, Brian Chess and Jacob West (of HP Fortify) gave a talk about this at RSA, I believe back in 2007, although I can't find the link right now.  While I would by no means call it "easy," I think there are things static analysis certainly can do to find such backdoors or "booby-trapped" code.
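As a toy illustration of that static-analysis angle: one telltale shape of a planted trigger is an equality comparison against a hardcoded literal sequence. Real tools are vastly more sophisticated than this, and the `SUSPECT_CODE` sample is invented, but walking the AST for that pattern is the kernel of the idea:

```python
# Toy static scan (illustration only): flag equality comparisons against
# hardcoded list/tuple literals -- a common shape for an input-sequence trigger.
import ast

SUSPECT_CODE = '''
def payout(buttons):
    if buttons == ["A", "B", "B", "A"]:
        return 2000
    return 100
'''

def find_hardcoded_triggers(source):
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Compare):
            for op, comparator in zip(node.ops, node.comparators):
                # flag `x == [literal, ...]` or `x == (literal, ...)`
                if isinstance(op, ast.Eq) and isinstance(comparator, (ast.List, ast.Tuple)):
                    findings.append(node.lineno)
    return findings

print(find_hardcoded_triggers(SUSPECT_CODE))  # -> [3]
```

A pattern match like this is only a lead, of course - plenty of hardcoded comparisons are benign - which is why these findings feed a human review rather than a verdict.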


Either way, this is clearly a failure in QA, as we've both agreed.  Thanks for the input!

The opinions expressed above are the personal opinions of the authors, not of HP. By using this site, you accept the Terms of Use and Rules of Participation