If temperature matters so much, explain how, with the same GPU, the Asus card can't maintain its 1120 MHz boost at 71C but the MSI holds 1120 MHz even at 77C. How come the lower-temperature card (which theoretically means a better cooler) is the one running worse? Will a card throttle itself back for no apparent reason even if you manually overclock it? The answer would seem to be that the two vendors set different temperature thresholds for their "boost", but try to find that documented anywhere, even though it would seem rather important to the card's performance (if you care about a few percent).
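My best guess (and it is only a guess, I haven't seen either vendor document it) is that boost behaves roughly like this sketch: each card has its own temperature target baked in, and the clock steps down a bin at a time once you cross it. The function name and every number below are made up to illustrate the idea, not taken from any spec:

def boost_clock(temp_c, target_c, boost_mhz=1120, base_mhz=1050, step_mhz=13):
    # Hypothetical model: drop one ~13 MHz bin per degree over the vendor's target.
    if temp_c <= target_c:
        return boost_mhz
    return max(base_mhz, boost_mhz - step_mhz * (temp_c - target_c))

# Made-up targets: say Asus picked 70C and MSI picked 79C...
print(boost_clock(71, 70))   # 1107 - the "cooler" card is already throttling
print(boost_clock(77, 79))   # 1120 - the hotter card still holds full boost

If something like that is going on, the temperature readout on its own tells you nothing about which card will hold its boost clock.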
As for a cooler card lasting longer... that's completely unmeasurable and incalculable, and therefore meaningless! Answer me this: if I take a stock 270X (which, according to a review I found, runs at 76C under load) and stick it in a shitty case with such terrible airflow that it runs at 95C under load (the R9's throttle-back threshold), how long will it last? One day less? Half as long? Only one day?
Choose the card with the longest guarantee if you're after long life. If that card happens to have a better cooler (it may well), win-win!
I used to overclock, back when you could do things like raise the FSB on a 1833MHz AMD 2500+ CPU from 166MHz to 200MHz and transform it into a top-of-the-range 2200MHz 3200+ (complete with the name, weirdly) for a 20% boost, or get an XP-M chip and push it even higher (e.g. I have records of pushing a 2500-M to 2400MHz or a 2400-M to 2580MHz, which was faster than any CPU you could buy at the time, 17% faster than even the top-of-the-range 3200+). I have benchmark records of running an nVidia 8800 GTS at a 650MHz core instead of 500MHz, a 30% overclock. That kind of stuff mattered.
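For anyone who missed that era, the arithmetic is trivial: the multiplier stays put and the FSB does all the work (the 11x multiplier is the real figure for that chip, the rest is just multiplication):

multiplier = 11                   # the 2500+ runs at 11 x FSB
stock = multiplier * 166.67       # ~1833 MHz, sold as "2500+"
overclocked = multiplier * 200    # 2200 MHz, same clock as the "3200+"
print(f"{stock:.0f} -> {overclocked:.0f} MHz, {overclocked / stock - 1:.0%} free")
# 1833 -> 2200 MHz, 20% free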
These days, thanks to the extended last console generation and everything being optimised for those machines... even my aging GTX 285 runs everything, so why bother? I'd bet even the R9 270 at stock outperforms the Xbox One.