what?
A hardware interrupt is exactly THIS:
Hardware has a direct line to the CPU, which it can raise to say "HEY! Pay attention to me and what *I* need to get done!" That's a hardware interrupt. x86 based computers historically had 8 interrupt lines (a single 8259 PIC) in the original PC/XT era, then 16 lines (two cascaded PICs, 15 usable) in the 16bit and 32bit AT-class designs. Modern systems expanded this further with the APIC and message-signaled interrupts.
The smaller the number on the interrupt, the higher its priority. Interrupt 0 is the system timer (the 8253/8254 PIT), which fires the periodic tick the OS uses for timekeeping and scheduling. (The RTC is a separate device over on IRQ 8.) Etc.
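(If you want to see how the IRQs actually get doled out on a modern box, Linux publishes the kernel's live table in /proc/interrupts: each line shows the IRQ number, per-CPU counts, the controller type, and which driver owns it. Here's a trivial C sketch that just dumps the file-- Linux-only, obviously, and purely illustrative:)

    /* Illustrative sketch: dump the kernel's live IRQ table on Linux.
     * /proc/interrupts lists each IRQ, per-CPU counts, the controller
     * (PIC/IO-APIC/MSI), and which driver owns the line. */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/interrupts", "r");
        if (!f) {
            perror("fopen /proc/interrupts");
            return 1;
        }
        char line[512];
        while (fgets(line, sizeof line, f))
            fputs(line, stdout);   /* IRQ 0 (the timer) prints first */
        fclose(f);
        return 0;
    }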
Since the PCI era, peripheral cards are no longer tied directly to the CPU the way they were with the ISA, EISA, MCA, and VESA local bus architectures. Instead, the PCI bus itself has a series of PCI interrupts that talk to the PCI bridge controller, which THEN talks to the CPU. This allows cards on the PCI bus to signal the bridge controller, which can do work on the CPU's behalf without interrupting the CPU-- things like hard disk IO transfers, copying memory in and out of expansion card memory windows, etc. Each card gets allocated a PCI interrupt, and when the bridge actually needs the CPU to stop what it is doing, the bridge controller raises the real hardware interrupt. PCIe is an evolution of the PCI bus concept, and works similarly.

(It used to be, back in the ancient days of the 8086 and pals, that disk access was basically a dance between the CPU and the hard disk interface card, which historically sat on IRQ 5 on the XT (IRQ 14 or 15 on AT-class machines) and communicated through a memory window at C8000 or D8000. The CPU would send commands to the controller through its IO address, the card would fetch data from the disk, present it at that memory window, then raise its IRQ to tell the CPU the fetch was complete. It did this for every read or write operation, which is why disk access would slow the computer to a crawl: the CPU was CONSTANTLY being interrupted. That changed with bus-mastering disk controllers. Instead of presenting data through a tiny window and forcing the CPU to do the memory copy itself, a bus-mastering controller could be instructed to fetch multiple words from the disk, grab hold of a chunk of system memory all by itself (when set up by the CPU beforehand), and populate it with the requested data, leaving the CPU free to do whatever. That's where the DMA mode disk IO schemes came into being. It started with 16bit ISA DMA, and has been improved upon ever since-- but true bus-mastering disk IO only came into being on the MCA, VLB, and PCI buses. On the PCI bus, the bridge controller can move huge chunks of memory to and from the disk without ever involving the CPU at all! Hundreds of megabytes even!)
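Here's that difference in toy form. This is NOT real driver code-- the "disk" is just an array, the one-word IO window and the stand-in memcpy bus master are invented for illustration-- but it shows why PIO hammered the CPU for every single word while bus mastering leaves it alone until the whole transfer is done:

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    #define SECTOR_WORDS 256            /* 512-byte sector, 16-bit words */

    static uint16_t disk[SECTOR_WORDS]; /* pretend platter */
    static uint16_t io_window;          /* pretend card data port */

    /* PIO style: the CPU gets poked for EVERY word and hauls each one
     * across the tiny window itself. */
    static void pio_read(uint16_t *dst)
    {
        for (int i = 0; i < SECTOR_WORDS; i++) {
            io_window = disk[i];        /* card presents one word */
            dst[i] = io_window;         /* CPU copies it by hand */
        }
    }

    /* Bus-master style: the CPU hands the controller a buffer address
     * and length, then walks away; ONE "transfer complete" interrupt
     * would fire at the end. */
    static void busmaster_read(uint16_t *dst)
    {
        memcpy(dst, disk, sizeof disk); /* controller writes RAM directly */
    }

    int main(void)
    {
        uint16_t buf[SECTOR_WORDS];
        for (int i = 0; i < SECTOR_WORDS; i++)
            disk[i] = (uint16_t)i;
        pio_read(buf);
        busmaster_read(buf);
        printf("last word: %u\n", (unsigned)buf[SECTOR_WORDS - 1]);
        return 0;
    }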
It sounds to me more like you had a misconfigured PCI card, which caused IRQ conflicts. Those used to happen ALL THE DAMN TIME back in the 286 and 386 era, but largely stopped in the post-486 era with vastly improved PCI bridge controllers and plug-and-play resource assignment.
However, it STILL happens on the PCI bus itself.
The way PCI works is-- as I said-- the bus controller has its own private set of IRQs that it uses to communicate with the card slots. Each slot has a PCI IRQ associated with it. By specification, there are four PCI IRQs: A, B, C, and D (INTA# through INTD#). If you have more than four slots' worth of devices in your machine, then some of them are sharing PCI IRQs. This INCLUDES things like the built-in USB controller, the built-in soundcard, the built-in ethernet card-- etc. The PCI bridge assigns a real hardware IRQ to each of its PCI IRQs, and then works as a kind of proxy/traffic warden.
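For the curious, the usual slot-to-line mapping follows a "barber pole" rotation: each slot's INTA# pin lands on a different one of the four lines than its neighbor's. This little C sketch shows the conventional swizzle formula and why, on a six-slot board, slots 0 and 4 end up on the same line. Actual boards are free to wire it however the vendor pleased, so treat this as illustrative, not gospel for your motherboard:

    #include <stdio.h>

    /* pin: 1=INTA# .. 4=INTD#; slot: physical slot/device number.
     * Returns which of the four PCI interrupt lines (1..4 -> A..D)
     * the pair lands on, per the common rotation convention. */
    static int pci_swizzle(int slot, int pin)
    {
        return ((pin - 1 + slot) % 4) + 1;
    }

    int main(void)
    {
        const char names[] = "ABCD";
        for (int slot = 0; slot < 6; slot++)   /* a 6-slot board... */
            printf("slot %d, card asserts INTA# -> routed to INT%c#\n",
                   slot, names[pci_swizzle(slot, 1) - 1]);
        return 0;   /* note slots 0 and 4 (and 1 and 5) come out shared */
    }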
However, if your device likes to own its IRQ, it can and will cause problems for other devices sharing that PCI IRQ. Chronic offenders in the past have been serial ports/modems, NICs, USB controllers, and HDD interfaces. The cards stuck sharing with them were usually soundcards, LPT ports, PS/2 mouse and keyboard ports, and video cards. Contentions on a PCI IRQ don't cause the system to hang and refuse to power on like hardwired IRQ conflicts did back in the stone age. Instead, they force the PCI bus controller to work VERY hard to synchronize the traffic on the bus, making some cards wait while another is talking, etc. For devices that demand immediate service, this causes the system to stutter or act "goofy".
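Modern OSes cope with sharing by chaining handlers on the line: when the IRQ fires, every driver registered on it gets asked "was it you?", and each one checks its own device's status register. Here's a toy model of that in C (all the device names are invented)-- the "likes to own the IRQ" problem is just a handler in this chain that hogs its turn and stalls everyone behind it:

    #include <stdio.h>

    struct fake_device {
        const char *name;
        int asserted;             /* did THIS device pull the line? */
    };

    /* Returns 1 if this device was the culprit (what Linux calls
     * IRQ_HANDLED), 0 to pass the question down the chain. */
    static int handler(struct fake_device *dev)
    {
        if (!dev->asserted)
            return 0;             /* not mine */
        printf("%s: serviced my interrupt\n", dev->name);
        dev->asserted = 0;
        return 1;
    }

    int main(void)
    {
        struct fake_device chain[] = {
            { "nic",   0 },
            { "sound", 1 },       /* the soundcard raised the line */
            { "usb",   0 },
        };
        /* the shared line fires once; walk everyone registered on it */
        for (int i = 0; i < 3; i++)
            if (handler(&chain[i]))
                printf("(handled by position %d in the chain)\n", i);
        return 0;
    }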
The PCI bridge controllers have improved TREMENDOUSLY since their first introduction in the early 90s-- back then, you HAD to know about PCI IRQs, and be mindful of which slots you put which cards into, to keep your system from acting like it had a concrete enema. These days, you can jam just about any old card into any old slot and have a system that works reliably.
If you are experiencing goofy behavior, consider freeing up the contended PCI IRQ: move a card or two into unpopulated slots (ones that map to a different PCI IRQ) and leave the old slot empty.
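Before you start shuffling cards, it helps to see who is doubled up. On Linux, each PCI device advertises its assigned IRQ in sysfs; this quick-and-dirty sketch lists them so shared lines stand out (assumes the usual /sys/bus/pci/devices layout):

    #include <stdio.h>
    #include <dirent.h>

    int main(void)
    {
        DIR *d = opendir("/sys/bus/pci/devices");
        if (!d) { perror("opendir"); return 1; }
        struct dirent *e;
        while ((e = readdir(d)) != NULL) {
            if (e->d_name[0] == '.')
                continue;
            char path[512];
            snprintf(path, sizeof path,
                     "/sys/bus/pci/devices/%s/irq", e->d_name);
            FILE *f = fopen(path, "r");
            if (!f)
                continue;
            int irq = -1;
            if (fscanf(f, "%d", &irq) == 1)
                printf("%-14s IRQ %d\n", e->d_name, irq);
            fclose(f);
        }
        closedir(d);
        return 0;   /* any IRQ number that repeats is a shared line */
    }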
But hardware interrupts as a source of permanent damage? Not unless your PCI bridge controller got toasted trying to field all the traffic control it was having to do! That kind of thing is ancient history these days. Seriously, just move your sound card into a different slot. Just try it.