Definitely not like ancient mythologies. I'm using "god" in the sense of a superhuman being who is, for most practical purposes, an almighty and all-knowing ruler of the universe. Not necessarily omnipotent and omniscient in an absolute sense, but powerful and intelligent enough, relative to us, that it can do whatever it wants without us being able to meaningfully contest its choices. I feel that there is good reason to believe that no such being currently exists (or, at least, that none interacts meaningfully with the universe), but substantially less reason to believe that such a being is physically impossible or that one couldn't come to exist. Likewise, there's no reason to believe that such a powerful being would automatically care about humans or be in any way the sort of almighty god that we want ruling over the universe. What we can say with near certainty, though, is that whatever god does end up existing, it probably won't act like, for example, Zeus.
As for possible origins, here are the top three contenders, in order of how I rate their likelihood:
1. Artificial General Intelligence is created. If humans manage to make a machine that is more intelligent than a human, that machine could go on to create a second machine that is more intelligent than it is, and so on and so forth, until we have a machine that is effectively all-knowing. From perfect knowledge (or at least very, very extensive knowledge), it's a quick step to becoming almighty, at least as far as humans are concerned: an in-depth understanding of human psychology and sociology, or a practical implementation of molecular nanotechnology, would probably suffice for the mightiness requirement. The issue here, then, is making sure that the top-level, "final" AGI ends up valuing human values and wanting humans to be happy. Which in turn requires that each of the predecessor stages has those values, which in turn requires that the initial superhuman AI has those values, which in turn requires that the AGI programmers, whoever they end up being, are capable of ensuring that the AI they create has those values, which in turn requires that those AGI programmers understand what exactly "human values" means on a complete, lucid and highly technical, "can program a computer to give sound moral and aesthetic advice" level. Which is a really, really difficult problem that urgently needs solving if we don't want the AGI to end up like something out of Allen Ginsberg or Harlan Ellison.
2. Intentional Panspermia is true. This hypothesis suggests that some extraterrestrial intelligence intentionally placed the first self-replicating molecules on Earth and then stopped interfering, allowing evolution to take its natural course. This isn't so much a case of gods not existing as of them not intervening until some milestone they've defined is reached - us finding them via interstellar travel, for example, or some arbitrary time limit passing and the experiment they're running ending, or any number of other things. There are a variety of good reasons to suppose this is not the case (particularly the Miller–Urey abiogenesis experiment, which suggests life could have arisen on Earth without outside help), but even if it happens to be false, the steps needed to prepare for meeting such a creator species, if it exists, would be excellent practice for dealing with any other extraterrestrials that happen to exist, or with other non-human minds such as animal uplifts or AGIs if we ever create those. Step one on the to-do list is rigorously replicating Miller–Urey so that we can be more confident about whether panspermia happened or not. Then we need to figure out how to effectively communicate with a completely alien mind. This is in a way the opposite of the AGI problem: whereas with the AGI we need to figure out what all of the specifically human moral and aesthetic values are and how to replicate them, in this situation we're looking for arguments that are cognitively general rather than human-specific - logical arguments that will be convincing to whatever strange alien lifeforms might exist out in space, completely divorced from our evolutionary, cultural and cognitive history. An alternative would be to go with Option 1 first: make an AGI to protect us from our hypothetical alien creators. This is risky, in that credible attempts at making an AGI might be the trigger condition for the intervention of whoever is behind the panspermia.
3. The Simulation Hypothesis is true. This is another hypothesis that doesn't appear to be true (though there's no experimental evidence either way, there are good epistemological reasons to reject it), but if it is true, that's very important to find out. Unfortunately, short of trying to communicate with whoever is running the simulation, there doesn't seem to be any way to determine experimentally whether the universe is being controlled from the outside. As with Option 2, making an AGI and asking it to try talking to whoever is hypothetically behind the simulation is an option, but a dangerous one: if the "outside" universe where the simulation is being run has an AGI (or other god) of its own, that AGI might well decide to turn off our simulation before we manage to make an AGI capable of escaping it. If not, the researchers running the simulation might try scuttling any AGI attempts. Given how unlikely this scenario is to be true and how little we could do about it if it were, there's not much sense in worrying about this case.