Not even because 32-bit will eventually be phased out, but because it's already obsolete, and it will only become more so.
As long as the kernel developers stick to not breaking old userlands, it's already effectively obsolete. People who need to keep i686 or older hardware going can and should run an embedded extended-LTS kernel with support for all modern hardware stripped right out. People who need 32-bit distros can just run them under an x86_64 kernel. Problem (mostly) sorted.
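To make the "32-bit distro under an x86_64 kernel" point concrete, here is a rough sketch assuming a Debian-style host (paths and suite names are illustrative). It relies on the 64-bit kernel having 32-bit compat (CONFIG_IA32_EMULATION) built in:

```shell
# Check that the running 64-bit kernel still carries 32-bit compat:
grep CONFIG_IA32_EMULATION "/boot/config-$(uname -r)"

# Run individual i386 binaries directly via multiarch:
sudo dpkg --add-architecture i386
sudo apt update && sudo apt install libc6:i386

# Or keep an entire 32-bit userland in a chroot; `setarch i686`
# fixes up uname so the guest userland sees a 32-bit machine:
sudo debootstrap --arch=i386 bookworm /srv/i386 http://deb.debian.org/debian
sudo setarch i686 chroot /srv/i386 /bin/bash
```

The same idea scales up to systemd-nspawn or full containers if a chroot is too bare.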
But if kernel developers plan to remove 32-bit backwards compatibility from their x86_64 platform support (it has always been a configurable option in make menuconfig, after all) to complete the phase-out of legacy code paths, then there needs to be a decent userland way to handle it (i.e. one as performant as the existing path) implemented long before support is ever ripped out.
Anything less than that and we’ll see Win32 binaries (under Wine) outliving native Linux ones on an operating system which is designed to run entire chroots of coexisting userlands, and that would be a huge travesty. Even when we have source code available, quite a bit of decent software which no longer sees updates is platform-specific.
The whole Year 2038 problem is mostly a nothingburger for legacy 32-bit software running on 64-bit systems, as one can just hook the appropriate calls in userspace (LD_PRELOAD of a simple userland library is a wonderful thing) and add a fixed negative offset to the times they return. Sure, affected legacy applications will show incorrect date/time info in things like internal file pickers, but time would continue to be accurately reflected on the actual filesystem and within modern, compliant software.
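A minimal sketch of that LD_PRELOAD trick (the filename, offset, and shim are all illustrative; a real shim would also need to hook gettimeofday(), clock_gettime(), stat() and friends, and would be built with -m32 against 32-bit libraries for i686 apps):

```c
/* time_shim.c -- hypothetical sketch of hooking time() via LD_PRELOAD.
 *
 * Build:  gcc -shared -fPIC -o time_shim.so time_shim.c
 * Run:    LD_PRELOAD=./time_shim.so ./legacy_app
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <time.h>

/* Illustrative fixed offset: shift reported time back ~30 years so a
 * 32-bit time_t stays below the January 2038 overflow. */
#define SHIM_OFFSET ((time_t)30 * 365 * 24 * 60 * 60)

time_t time(time_t *tloc)
{
    /* Look up the real libc time() the first time we're called. */
    static time_t (*real_time)(time_t *);
    if (!real_time)
        real_time = (time_t (*)(time_t *))dlsym(RTLD_NEXT, "time");

    time_t t = real_time(NULL) - SHIM_OFFSET;
    if (tloc)
        *tloc = t;
    return t;
}
```

The legacy app sees a date decades in the past, but nothing outside the process is touched, so the filesystem and everything else keeps correct 64-bit time.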
What we all really need to be thinking about is keeping modern libraries working across both i686 and x86_64 so that older applications remain usable (e.g. if 32-bit NVIDIA driver libraries disappear all of a sudden, then we lose access to our older video games).