NYSE: Bad software rollout – not hackers – took out the Stock Exchange

Last Wednesday, trading was halted on the New York Stock Exchange.

You can probably guess what happened next, right? That’s right…

PANIC!!!!

Inevitably, some people wondered out loud whether the NYSE’s downtime (especially when it came on the same day that United Airlines flights were grounded and the Wall Street Journal’s website was disrupted) might be the result of a malicious attack by evil hacker masterminds.

Some even went so far as to specifically blame Chinese hackers.


Even the shy-and-retiring John McAfee was unafraid to put his cards on the table and publicly assert that Anonymous was to blame, describing the likelihood of the three events happening independently on the same day as “one in a billion”.

Well, I wonder if John McAfee plays the lottery…

Because the NYSE has published a statement on its website, explaining that the problems were not due to a hacker attack, but instead a plain-and-simple configuration problem with a new version of software rolled out on customer gateways.


Here is the text of the NYSE’s explanation of the outage in full:

On Tuesday evening, the NYSE began the rollout of a software release in preparation for the July 11 industry test of the upcoming SIP timestamp requirement. As is standard NYSE practice, the initial release was deployed on one trading unit. As customers began connecting after 7am on Wednesday morning, there were communication issues between customer gateways and the trading unit with the new release. It was determined that the NYSE and NYSE MKT customer gateways were not loaded with the proper configuration compatible with the new release.

Prior to the market open, gateways were updated with the correct version of software and stocks opened at 9:30am. However, the update to the gateways caused additional communication issues between the gateways and trading units, which began to manifest themselves mid-morning. At 11:09am, NYSE issued a Market Status message that a technical issue was being investigated. At 11:32am, because NYSE and NYSE MKT were actively trading but customers were still reporting unusual system behavior, the decision was made to suspend trading on NYSE and NYSE MKT. NYSE ARCA, Arca Options and NYSE AMEX Options were not impacted by this event and continued to trade normally.

NYSE and NYSE MKT began the process of canceling all open orders, working with customers to reconcile orders and trades, restarting all customer gateways and failing over to back-up trading units located in our Mahwah, NJ datacenter so trading could be resumed in a normal state. In consultation with regulators and industry, we determined that we would implement a complete restart and that NYSE MKT primary listings would resume trading at 3:05pm and NYSE primary listings and NYSE MKT Tape C symbols would resume trading at 3:10pm. Trading resumed as scheduled and the closing auctions accepted orders and executed normally. All NYSE and NYSE-MKT listed securities traded for the entire day either on NYSE and NYSE MKT or other market centers.
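Reading between the lines of the statement, the root cause was a version mismatch: the customer gateways weren't running a configuration compatible with the new release deployed on the trading unit. One defensive pattern for that sort of failure is a compatibility handshake at connection time, where both sides compare the versions they expect before accepting any order flow. Here's a minimal, hypothetical sketch of such a check – all the class names, field names and version strings are mine for illustration, and nothing here reflects how NYSE's systems actually work:

```python
# Hypothetical sketch of a version-compatibility handshake between a
# customer gateway and a trading unit. All names and version values are
# illustrative -- they do not describe NYSE's real systems.

from dataclasses import dataclass


@dataclass
class ReleaseInfo:
    software_version: str   # the release a component is running
    config_version: str     # the configuration that release expects


class IncompatibleConfigError(Exception):
    pass


def handshake(gateway: ReleaseInfo, trading_unit: ReleaseInfo) -> None:
    """Refuse the session if the gateway's configuration doesn't match
    what the trading unit's release expects."""
    if gateway.config_version != trading_unit.config_version:
        raise IncompatibleConfigError(
            f"gateway config {gateway.config_version!r} is not compatible "
            f"with trading-unit release {trading_unit.software_version!r} "
            f"(expects config {trading_unit.config_version!r})"
        )


if __name__ == "__main__":
    gateway = ReleaseInfo(software_version="gw-2.3", config_version="cfg-2015.06")
    trading_unit = ReleaseInfo(software_version="tu-2.4", config_version="cfg-2015.07")
    try:
        handshake(gateway, trading_unit)
    except IncompatibleConfigError as err:
        # Fail fast at connect time instead of trading on a mismatched setup.
        print("Connection refused:", err)
```

The point of a check like this is to fail loudly when customers connect at 7am, rather than letting mismatched components start the day and have problems "manifest themselves mid-morning", as the statement puts it.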

Like I said at the time, “the NYSE could be suffering from a technical glitch that has nothing to do with hoody-wearing hackers”.

Quite often, people jump to the assumption that malware or online criminals are to blame, rather than considering a less exciting explanation.


Graham Cluley is an award-winning keynote speaker who has given presentations around the world about cybersecurity, hackers, and online privacy. A veteran of the computer security industry since the early 1990s, he wrote the first ever version of Dr Solomon's Anti-Virus Toolkit for Windows, makes regular media appearances, and is the co-host of the popular "The AI Fix" and "Smashing Security" podcasts. Follow him on Bluesky and Mastodon, or drop him an email.

3 comments on “NYSE: Bad software rollout – not hackers – took out the Stock Exchange”

  1. drsolly
  2. zen girl

    OK, so they are blaming it on the leap second?
    Seriously, who performs an update in the middle of the week on such a crucial system without thoroughly testing it?

    And you forgot to mention that yet another set of fiber optic cable was cut in Silicon Valley. Was that part of the "glitch" too?

    It is much easier for the sheeple mind to say "ah yeah, OK, a glitch, I get those" than it is for them to open their eyes to the reality of how fragile our systems really are.

    1. Coyote · in reply to zen girl

      You're under the false belief that it works that way; it doesn't. Programmers are humans, and humans aren't perfect. This is normal. No matter what you anticipate in advance, there will at times be some thing(s) that you miss (or some other variable appears that changes things drastically). And it so happens that those changes might be an additional feature (or a fix for another problem), or even someone else looking at the code who thinks they see a bug and 'fixes' it, only to create new problems. NO AMOUNT OF TESTING WILL FIND ALL PROBLEMS 100% OF THE TIME! Whether critics of programmers recognise this as truth or not doesn't change the reality. What does Silicon Valley have to do with this? As for the leap second – it happened on June 30 (the previous one was on 2012/06/30); the SIP timestamp refers to something else entirely (which might just be why they didn't mention the 'leap second' that happened some days before this incident).

      As for your final remark: admitting an error on their part is them facing the reality of it all. It is just that you seem to think it has to do with the technology, when it really has to do with human mistakes (which, in the end, amounts to the same thing).
