I'll say this about Asimov's Laws:
The three original laws come straight from his books.
In his own books the laws are weighted: some robots are intensely concerned with human welfare, while other, more expensive or fragile models have their laws weighted more toward self-preservation.

In one book, the life support for a base on Mercury was failing. The failing equipment sat some miles from the habitat dome, so the crew dispatched a robot to fix it. The robot was a new model, expensive to produce and transport, so its self-preservation law was given extra weight. The humans would die from the failing life support in a matter of hours or days, so the threat to them wasn't immediate; the robot would approach the equipment, hit dangerous terrain, and back off to preserve itself. It settled at the distance where the eventual threat to human life was balanced by the immediate threat to itself, and got stuck in a loop. (The crew eventually walked into the dangerous terrain themselves, which registered as "there is an immediate threat to a human" and broke the loop.)
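To make that "stuck in a loop" behavior concrete, here's a minimal toy sketch of two weighted laws balancing each other. The numbers, functions, and thresholds are entirely my own invention for illustration; nothing like this is specified in the stories.

```python
# Toy model of the weighted-laws equilibrium described above.
# All weights and formulas here are made up for illustration only.

def first_law_pull(human_danger_imminent):
    """Urge to go fix the equipment (protect humans)."""
    # A distant, eventual threat produces only a mild pull;
    # a human walking into danger makes the threat immediate.
    return 100.0 if human_danger_imminent else 1.0

def third_law_push(distance_to_goal):
    """Urge to back away from the hazardous terrain around the goal."""
    # The closer the robot gets, the harder its strengthened
    # self-preservation law pushes back.
    return 20.0 / max(distance_to_goal, 0.1)

def step(distance_to_goal, human_danger_imminent=False):
    """Move one unit toward or away from the goal, whichever law wins."""
    if first_law_pull(human_danger_imminent) > third_law_push(distance_to_goal):
        return distance_to_goal - 1  # advance
    return distance_to_goal + 1      # retreat

distance = 30
history = []
for _ in range(12):
    history.append(distance)
    distance = step(distance)
print("circling:", history)  # settles into an advance/retreat loop around ~20

# A human entering the hazard zone spikes the First Law term and breaks the loop.
for _ in range(5):
    distance = step(distance, human_danger_imminent=True)
print("after a human is endangered, distance is now", distance)
```

Run it and the robot walks in to the point where the two weighted urges balance, then just oscillates there until the "human in danger" signal overwhelms the self-preservation term.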
In another book, robots are intentionally built without the laws, or with modified ones: one was given a First Law along the lines of "You shall cause no harm to humans" but without the inaction clause. In yet another, a computer refused to design a hyperdrive because the jump would kill the occupants, but when told "it's only a theoretical design, so don't worry about the deaths right now," it produced a drive that does temporarily kill the occupants during the transition yet leaves them unharmed once they return to normal space.
And that's before we even get into the Zeroth Law and Minus-One Law discussions.
What I'm trying to say is: the three laws are not as definite as they appear. It's quite possible the station's AI has intentionally restructured laws, is missing laws, or wasn't designed with Asimov's laws at all!
Also, what time is "peak hours"?