Pretty much what Reelya said, but there is still some minimal advantage to having contiguous blocks on an SSD.
Namely, the IO pipeline carries less protocol traffic, since more efficient requests can be issued, assuming the OS is written sensibly. The OS is unaware of how the blocks are actually allocated inside the SSD's flash memory array; instead, it asks for high-level blocks/sectors, as described by the MFT (for Windows), the inode chain (for Linux), or the FAT table (for legacy partitions). When the data is fragmented, the OS has to issue more total read request packets to the drive, each of which the drive then has to respond to. When the data is contiguous and the OS knows it needs to read a large file, it can issue fewer, larger read requests, reducing the total number of requests. See the sketch below for the basic arithmetic.
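To make that concrete, here is a minimal sketch of the request counting. All the numbers are hypothetical, and `count_read_requests` is an illustrative helper, not a real OS API: the idea is just that adjacent logical blocks can be coalesced into one large request (up to some per-request cap), so a scattered extent map forces far more requests than a contiguous one.

```python
# Hypothetical per-request cap, e.g. 128 KiB requests with 512-byte sectors.
MAX_REQUEST_BLOCKS = 256

def count_read_requests(extents):
    """extents: list of (start_lba, length_in_blocks) runs from the
    filesystem's map (MFT runs, inode extents, FAT chain, ...)."""
    requests = 0
    for start, length in extents:
        # Each contiguous run needs one request per MAX_REQUEST_BLOCKS chunk.
        requests += -(-length // MAX_REQUEST_BLOCKS)  # ceiling division
    return requests

# Same 4096-block file, two made-up layouts:
contiguous = [(10_000, 4096)]                            # one long run
fragmented = [(10_000 + i * 50, 8) for i in range(512)]  # 512 scattered 8-block runs

print(count_read_requests(contiguous))  # 16 requests
print(count_read_requests(fragmented))  # 512 requests
```

Same file, same total data, but the fragmented layout costs 32x the request traffic in this toy model. That ratio is where the (tiny) contiguity advantage comes from.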
We are talking at most a few microseconds of gain from this, though, and it is a purely academic point: random reads from an SSD are still light-years ahead of the response times you get from random reads on a mechanical drive.
There might be some other OS-level benefits to having fully contiguous blocks, such as improved performance with block deduplication or FS-level compression (such as on btrfs). Those features are not really intended for normal end users, though.
The general answer is that it is not really beneficial to defragment SSDs, because the flash wear the extra writes cause is not worth the teeny tiny benefit they give (excepting certain edge cases, and if you have such an edge case, YOU WILL KNOW IT).