I've worked (though non-clinically) in the pharma-study industry and know a little of the process. If it's as the BBC link says (just seen; edited in after I'd read the original post content) then this one study is far from 'proving' efficacy.
If the percentages were reversed, it wouldn't be distinguishable from chance aberration[1], and if the trial is preventing other interventions[2] then I'm not surprised it was stopped early.
This doesn't sound the death knell for the drug, but it's a datum point that shouldn't be forgotten in the meta-analysis if/when more favourable trial results come back from elsewhere. It's a stringent world out there (at least post-thalidomide), and if the company behind it are just trying to say "it's not as bad as you seem to assume" then I'd agree with them. If they're trying to say "it's better than it looks", then they need other preliminary study results in their back pocket (needing a final step or two of signing-off before publishing) to justify their words.
On the pricing issue, I'm aware(ish) of how expensive drug development (and then regulatory hurdle-jumping) can be, but that markup looks excessive. A little-used drug (but important, where it is, so it can't quietly be left fallow) that has eaten up much of its patent-protection time in the run-up to full approval may have to be priced high to recoup development (etc.) costs before it becomes subject to generic competitors rushing in. But with the "great white hype" of this one, the immediacy/scale of likely need, and the probability that there'd be a willing rush of buy-ins to procure "pre-generic" generic production if the owner can't serve all the demand, that price point seems self-defeating. As if they're aiming at the "miracle cure" believers in the consumer market before the belief-bubble bursts. (I.e., they don't have any useful pre-preprint studies in their back pocket.)
Or there's tricksier shenanigans going on, but I don't have any reason to believe that. Much as I've spent zero time looking for history on the product[3], so some of my other speculation could also be a bit off.
[1] Which might mean they had the bad luck to see a bad roll of adverse events flipping an actually useful outcome, rather than a good roll of chance improvements in the right cohort masquerading as what they were hoping to see. But to claim this would be unsupportable.
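To put a rough number on that "indistinguishable from chance" point: with the small arms typical of an early-phase trial, even a fairly large-looking gap in response rates can fail a significance test outright. A minimal sketch (the arm sizes and response counts below are entirely hypothetical, just chosen to illustrate the scale of the problem) using a hand-rolled two-sided Fisher exact test:

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher exact test on the 2x2 table [[a, b], [c, d]]:
    a/b = responders/non-responders in the treatment arm,
    c/d = responders/non-responders in the control arm."""
    r1, r2, k, n = a + b, c + d, a + c, a + b + c + d

    def hyper(x):
        # Hypergeometric probability of x treatment responders
        # given the fixed row and column totals
        return comb(r1, x) * comb(r2, k - x) / comb(n, k)

    p_obs = hyper(a)
    # Sum the probabilities of every table at least as extreme
    # (i.e. no more likely) than the one observed
    return sum(hyper(x)
               for x in range(max(0, k - r2), min(k, r1) + 1)
               if hyper(x) <= p_obs + 1e-12)

# Hypothetical Phase II arms of 20 each: 12/20 respond on drug, 8/20 on control.
# A 60% vs 40% split looks promising, yet p is nowhere near 0.05.
p = fisher_exact_p(12, 8, 8, 12)
print(round(p, 3))  # roughly 0.34 -- consistent with pure chance
```

The same arithmetic runs in reverse: 8/20 vs 12/20 (the "percentages reversed" case) gives the identical p-value, which is exactly why a single small trial can neither crown nor bury the drug on its own.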
[2] Larger studies can probably allow freedom to vary outside the core treatment/control dichotomy, but this one (probably Phase II, and IIa at that, combined with the immediately life-threatening condition) would have to be constrained, as you can't otherwise eke out enough match-for-match equivalence in confounding factors across the divide.
[3] I presume it's not come out of the blue, probably developed for/into another pathology and thus far closer to end-of-patent than a novel Phase I graduate would be, but I'm well out of the clinical study field, these days, and am content to not look it up, or recontact old colleagues to see if they know anything.