Can’t confirm that. In the 90s encodings were a nightmare. ISO-8859-1, ISO-8859-15, CP1252, IBM850, …
If you tried to build a website with an upload form, you’d get the most bizarre encodings and there was no way to reliably distinguish them.
I’m not a native English speaker; my world is full of umlauts and the s-z ligature ß. Things got A LOT better in recent years, thanks to Unicode encodings.
Still needs to be widely used. It took me about an hour to figure out that my encoding issues were because of Vim being in latin1, another to figure out how to change that, and a third to realize that screen also wasn’t in UTF-8 mode.
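In case it saves someone else those hours, here’s roughly what fixed it for me; a minimal sketch, assuming stock ~/.vimrc and ~/.screenrc files:

    " ~/.vimrc: use UTF-8 internally, and try UTF-8 before latin1
    " when detecting a file's encoding
    set encoding=utf-8
    set fileencodings=ucs-bom,utf-8,latin1

    # ~/.screenrc: treat the terminal and new windows as UTF-8
    defutf8 on

    # or, for a single session, start screen in UTF-8 mode:
    # screen -U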
These errors were much more common before Unicode encodings were in broad use. Unicode pretty much solved this.
Still happens with new emoji on old OSes, or when the font in use is simply missing characters.
Exactly, today these errors are almost always because of emoji that older systems don’t have yet.
Only if it’s enabled by default, or the dev knows to enable it.
I had a lot of weird problems processing some info with names in PowerShell until I found out that PowerShell doesn’t default to a Unicode encoding when shoving output into files. You can easily specify the encoding, but if you don’t, it replaces any non-ASCII characters with “?” by default, so it’s not even immediately obvious that there’s an incorrect character, since it silently substitutes a valid one.
It’s little-endian UTF-16 with a BOM by default, actually, unless you upgrade to PowerShell 7, which defaults to BOM-less UTF-8. The silent “?” substitution comes from cmdlets with ANSI or ASCII defaults, like Set-Content and Export-Csv.
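To make the pitfall concrete, a minimal sketch, assuming Windows PowerShell 5.1 on a machine with the Western (CP1252) ANSI code page; the file names are made up:

    # Out-File defaults to UTF-16LE with a BOM on Windows PowerShell 5.1:
    'Müller, Яна' | Out-File names-utf16.txt

    # Set-Content defaults to the system ANSI code page: "ü" survives on
    # CP1252, but the Cyrillic letters are silently written as "?":
    'Müller, Яна' | Set-Content names-ansi.txt

    # Specifying the encoding explicitly is safe on every version:
    'Müller, Яна' | Out-File -Encoding utf8 names-utf8.txt

On PowerShell 7 both cmdlets default to BOM-less UTF-8 anyway, so the explicit -Encoding mainly matters if a script also has to run on 5.1.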
I like your enthusiasm. I remember when I believed the same. The last 16 years have clearly shown this is not the case.
No it hasn’t. It has just pushed them out of sight for English natives.