So, on the topic of machine learning, here's my own speculation on the future (although I don't know all that much about such things).
From what I've gathered, as it stands no A.I can think about, let alone act upon, anything outside of its programming. It can, however, do things that the humans who made/use/run it never intended, as long as those things help it achieve its programmed goal. The A.I they taught to play Tetris that paused the game instead of losing, even though it wasn't supposed to "know" about the pause function, and the A.I they had speedrunning Super Mario Bros that accidentally discovered a glitch nobody knew about, decided "hey, this works better," and started using it, both come to mind (there's a rough toy sketch of that kind of goal-chasing at the end of this post).

But theoretically, couldn't a human-level A.I (as in one capable of actual thinking and contemplating, one that would be considered sentient) do things its programming doesn't tell it to do? After all, humans do things nature never "intended" for us to do all the time: make A.I, go to the moon, build houses, drive cars. Basically everything in modern life is something not written into our genetic code/instinct, and most of it doesn't happen in any other animal. The things that DO happen in other animals are usually:

A. Somewhat instinctual (those ant species that build bridges and other structures with their own bodies come to mind here; I'd guess the ant species that farms fungus and harvests the food it produces would fall here too, although I don't know enough about either of them to say for sure),

B. Happening in the more intelligent animals (such as the other apes, crows, and octopi, which IIRC also have only a 24-hour memory span and so have to relearn things every day, although I'm not sure about that fact either; I've also heard they don't seem to have emotions), or

C. Things that have to be taught by humans, although whether the animals pass that knowledge down to their young seems to vary.
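To illustrate the "it'll do whatever scores best under its goal, intended or not" idea, here's a toy sketch in Python. The states, actions, and scores are all made up for illustration; this is not how the actual Tetris experiment worked, just the bare logic of an agent that only cares about its objective:

```python
# Toy sketch: a greedy "agent" that only cares about its scored objective.
# All of these states, actions, and scores are invented for illustration.

def objective(outcome: str) -> int:
    """The programmed goal: losing is the only thing that's penalized."""
    scores = {"game_over": -100, "game_keeps_going": 0}
    return scores[outcome]

# The board is about to top out: every move the designers had in mind loses.
intended_actions = {
    "move_left":  "game_over",
    "move_right": "game_over",
    "drop_piece": "game_over",
}

# ...plus an action nobody thought to rule out.
all_actions = {**intended_actions, "pause": "game_keeps_going"}

def best_action(actions: dict) -> str:
    # Pick whatever scores highest under the objective; intent never enters into it.
    return max(actions, key=lambda action: objective(actions[action]))

print(best_action(intended_actions))  # move_left (every option scores -100)
print(best_action(all_actions))       # pause (0 beats -100, so it never "loses")
```

The point being that "pause" isn't rebellion or creativity in any deep sense: it's just the highest-scoring move under the goal the program was given.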