Hardly, I’m directly addressing your statement that case insensitivity is intuitive to users, grandmas or otherwise - I gave examples where it’s not intuitive or obvious which filenames match. I didn’t mention ease of implementation at all.
The principle of least surprise is an important UX consideration, and your idea of effectively introducing collation and localising which files conflict just trades one problem for another set of problems and surprises (e.g. copying directories between drives with different settings).
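To make the kind of examples I mean concrete, here’s a rough sketch (Python, using casefold() plus NFC normalization as a stand-in for whatever folding a case-insensitive filesystem actually picks; the filenames are made up) of how non-obvious “which names collide” gets once you leave ASCII:

```python
# Rough sketch: which of these made-up filenames would a case-insensitive,
# normalization-aware lookup treat as the same file? (NFC + casefold here is
# just a stand-in for whatever folding a real filesystem chooses.)
import unicodedata

def fold(name: str) -> str:
    return unicodedata.normalize("NFC", name).casefold()

pairs = [
    ("README.txt", "readme.TXT"),   # the obvious ASCII case everyone expects
    ("straße.txt", "STRASSE.txt"),  # German sharp s case-folds to "ss"
    ("5\u212a.log", "5K.log"),      # Kelvin sign vs Latin capital K
    ("I.txt", "\u0131.txt"),        # Latin I vs Turkish dotless i
    ("\u0410.txt", "A.txt"),        # Cyrillic А vs Latin A, visually identical
]

for a, b in pairs:
    verdict = "same file" if fold(a) == fold(b) else "different files"
    print(f"{a!r} vs {b!r}: {verdict}")
```

The first three pairs collide and the last two don’t, which is not what most people would guess just from looking at the names.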
No, it’s not. You’re substituting an edge case for the base use case and pretending they are on the same order when it comes to UX. They are not. File localization and mixing and matching alphabets in filenames is NOT the same thing as case sensitivity and using cases (or spaces, if we want to roll this conversation back a couple of decades and talk about an actual implementation mess) in filesystems. Security and stability care about edge cases; it’s weird that you try to flex by name-dropping the “principle of least surprise” and then pretend that a problem impacting every single user who types a filename has the same impact as a user mixing and matching alphabets across multiple cases. ESPECIALLY when your example requires the conscious decision to treat equivalent characters across alphabets the same as letter case, which is not a given at all.
Oh, and it’s not my idea. Default Windows and Mac FSs are case insensitive, and legacy FAT systems are case insensitive. If the issue is standardization across systems, case sensitivity is the odd one out. If you’re having issues mixing and matching drives across older, still-supported case-insensitive FSs, the blanket fix is not adding a case-sensitive system elsewhere for no particularly good reason. I mean, speaking of minimizing surprise…