The problem is that the SD card spec specifically mandates exFAT, and exFAT lacks essential features (for a modern OS).
What it DOES do is allow absurdly huge cluster sizes (in the multiple-megabyte range), which is what allows a factory-formatted SD card to not die extremely quickly from write amplification.
The SD Association pointedly and purposefully does not standardize erase block sizes. This is why Raspbian does not have "industry defaults" to fall back on, and why they don't set the extended ext4 options to get erase-block-sized atomic writes.
Ext4 is tailored for modern spinning disks, and uses a 4K block size (the mkfs default).
The filesystem driver will queue up sectors to write in a software buffer until the appropriate atomic write size is reached, then commit them in batches -- when the RAID options are enabled. This is intended for efficient writes across an array, but a flash disk is best viewed as an interleaved RAID-0 LUN. This is why the RAID feature works so well to maximize life and IOP throughput on a single flash-based disk.
SD cards especially can change radically from one production run to the next, based entirely on commodity component pricing and availability. That is precisely why the association does not standardize erase block configurations, and instead focuses on the filesystem used: as long as the chunks are consistent, the manufacturer can use whatever arrangement they want, format the device accordingly, and spec-compliant devices will commit chunks of that size. The medium stays happy, healthy, and speedy.
This is why the easiest way to learn the card's flash geometry is just to look at the factory format: the exFAT cluster size and the partition offset are chosen to match the flash underneath.
Otherwise, you have to use flashbench, and voodoo.
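If you'd rather read the cluster size out of the boot sector yourself, exFAT stores it as two shift fields whose offsets are defined in Microsoft's published exFAT specification (BytesPerSectorShift at byte 108, SectorsPerClusterShift at byte 109). A minimal Python sketch; the device path in the usage comment is just an example:

```python
# Read the cluster size straight out of an exFAT boot sector.
# Field offsets are from Microsoft's exFAT specification:
#   byte 3..10  = FileSystemName ("EXFAT   ")
#   byte 108    = BytesPerSectorShift
#   byte 109    = SectorsPerClusterShift

def exfat_cluster_size(boot_sector: bytes) -> int:
    """Return the cluster size in bytes encoded in an exFAT boot sector."""
    if boot_sector[3:11] != b"EXFAT   ":
        raise ValueError("not an exFAT boot sector")
    bytes_per_sector_shift = boot_sector[108]
    sectors_per_cluster_shift = boot_sector[109]
    # Cluster size = sector size * sectors per cluster, both powers of two.
    return 1 << (bytes_per_sector_shift + sectors_per_cluster_shift)

# Usage (device path is hypothetical; point it at the card's exFAT partition):
#   with open("/dev/mmcblk0p1", "rb") as dev:
#       print(exfat_cluster_size(dev.read(512)))
```

On a factory-formatted card, the value this returns is the chunk size the manufacturer tuned the card for.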
The volatility between production runs means that you cannot just say "it's a 256GB Transcend" and have known block sizes. The actual flash chip and the microcontroller that services it can be any combination of dozens of offerings, and still be a "256GB Transcend".
SSDs tend to be vastly more consistent, and manufacturers are less guarded about what is actually in there driving it.
Rules of thumb for SDCards:
Look at the size of the gap at the start of the disk, before the first partition, in the factory format.
Look at the cluster size of the exFAT filesystem in the factory format.
These will tell you the erase block size and the page size, respectively.
(The card manufacturer wants to avoid having the partition table and primary FAT get compromised by excessive writes, so they pad out THE ENTIRE ERASE UNIT that these structures live in, setting the start of the partition that far from the start of the volume, at the very beginning of the NEXT erase block. This is why the gap reveals the erase block size.
The microcontroller that drives the flash array has a finite amount of RAM it can use to do operations on that array. This is the page size. The manufacturer optimizes the exFAT filesystem to work in chunks of this size, so the controller does not waste IOP cycles. This is especially true for "high speed" SD cards.)
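Working the gap rule backwards is just arithmetic: start sector of the first partition (from `fdisk -l` or similar) times the logical sector size. A sketch with example numbers, not your card's real ones:

```python
# Derive the erase block size from the factory partition gap.
# The start sector comes from the card's partition table; the values
# here are a typical example (8192 sectors * 512 bytes = 4 MiB gap).

SECTOR_SIZE = 512             # logical sector size reported by the card
first_partition_start = 8192  # start sector of partition 1 (example value)

erase_block = first_partition_start * SECTOR_SIZE
print(f"erase block: {erase_block} bytes ({erase_block // (1 << 20)} MiB)")
```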
What the "stride" and "stripe-width" parameters do is define how much data to write across an array to load all disks equally, and what size increments to send to the array controller to maximize cache-use efficiency. These are directly analogous to the page size and the erase block size, respectively. That's why it works.
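Mapping that analogy onto mkfs.ext4's `-E stride=` and `stripe-width=` options is a division by the filesystem block size. A sketch, assuming example geometry (a 16 KiB page and a 4 MiB erase block; substitute what the factory format on your card actually implies):

```python
# Turn flash geometry into ext4 stride/stripe-width values.
# The geometry numbers are examples, not universal defaults.

FS_BLOCK = 4096                 # ext4 block size (mkfs default)
page_size = 16 * 1024           # from the factory exFAT cluster size (example)
erase_block = 4 * 1024 * 1024   # from the pre-partition gap (example)

stride = page_size // FS_BLOCK          # fs blocks per "disk" chunk
stripe_width = erase_block // FS_BLOCK  # fs blocks per full "stripe"

print(f"mkfs.ext4 -b {FS_BLOCK} "
      f"-E stride={stride},stripe-width={stripe_width} /dev/mmcblk0p1")
```

With these numbers that prints a command using stride=4 and stripe-width=1024, which batches commits into erase-block-sized stripes, exactly the RAID behavior described above.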
In terms of a real RTC inside a Pi: NO, it does not have one. The SoC it leverages must be configured on each and every boot. This is one of the many reasons for the Pi-specific config file.
It is also why the Pi must initially boot from the SD card. That boot process COULD be chainloaded, in a manner similar to other embedded devices with u-boot and pals, but it would not give much utility, and would still need to be protected/managed by the OS.