How can I get the Unicode code point(s) of a Character?


From what I can gather in the documentation, they want you to get Character values from a String because it gives context. Is this Character encoded with UTF8, UTF16, or 21-bit code points (scalars)?

If you look at how a Character is defined in the Swift framework, it is actually an enum value. This is probably done due to the various representations from String.utf8, String.utf16, and String.unicodeScalars.

It seems they do not expect you to work with Character values but rather Strings and you as the programmer decide how to get these from the String itself, allowing encoding to be preserved.

That said, if you need to get the code points in a concise manner, I would recommend an extension like this:

extension Character {
    func unicodeScalarCodePoint() -> UInt32 {
        let characterString = String(self)
        let scalars = characterString.unicodeScalars
        // Note: this returns only the first scalar; a Character (a grapheme
        // cluster) may consist of more than one scalar.
        return scalars[scalars.startIndex].value
    }
}

Then you can use it like so:

let char: Character = "A"
char.unicodeScalarCodePoint() // 65
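If the character may contain more than one scalar (flags, accented letters built from combining marks, and so on), a variant that returns all of them avoids silently dropping data. This is a sketch; the property name `codePoints` is my own, not part of the standard library:

```swift
extension Character {
    /// All Unicode scalar values (code points) that make up this Character.
    /// (`codePoints` is a name chosen for this example.)
    var codePoints: [UInt32] {
        return String(self).unicodeScalars.map { $0.value }
    }
}

let simple: Character = "A"
print(simple.codePoints)            // [65]

// "e" followed by U+0301 (combining acute) is ONE Character, TWO scalars.
let accented: Character = "e\u{301}"
print(accented.codePoints)          // [101, 769]
```

This makes the multi-scalar case explicit instead of returning just the first scalar.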

In summary, string and character encoding is a tricky thing when you factor in all the possibilities. In order to allow each possibility to be represented, they went with this scheme.

Also remember this is a 1.0 release, I'm sure they will expand Swift's syntactical sugar soon.


I think there are some misunderstandings about Unicode. Unicode itself is NOT an encoding; it does not transform grapheme clusters (or "characters", from the human reading perspective) into any sort of binary sequence. Unicode is just a big table that collects all the grapheme clusters used by every language on Earth (unofficially, it also includes Klingon). Those grapheme clusters are organized and indexed by code points (a 21-bit number in Swift, written like U+D800). You can find the character you are looking for in the big Unicode table by using its code point.

Meanwhile, UTF-8, UTF-16, and UTF-32 are actually encodings. Yes, there is more than one way to encode Unicode characters into binary sequences. Which encoding you use depends on the project you are working on, but most web pages are encoded in UTF-8 (you can actually check it right now).
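To make the distinction concrete, here is a small sketch showing that one code point, U+00E9 ("é"), has different code units under different encodings, while the code point itself never changes:

```swift
let eAcute = "\u{E9}"  // "é", the single code point U+00E9

// UTF-8 needs two 8-bit code units for this code point.
print(Array(eAcute.utf8))                      // [195, 169]

// UTF-16 needs one 16-bit code unit.
print(Array(eAcute.utf16))                     // [233]

// The scalar value is the code point itself (0xE9 == 233).
print(eAcute.unicodeScalars.map { $0.value })  // [233]
```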

Concept 1: A Unicode code point is called a Unicode scalar in Swift

A Unicode scalar is any Unicode code point in the range U+0000 to U+D7FF inclusive or U+E000 to U+10FFFF inclusive. Unicode scalars do not include the Unicode surrogate pair code points, which are the code points in the range U+D800 to U+DFFF inclusive.
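Swift enforces this range at the type level: `Unicode.Scalar`'s failable initializer returns `nil` for surrogate code points and for values above U+10FFFF. A minimal sketch:

```swift
// Valid scalar: U+0061 is "a".
print(Unicode.Scalar(UInt32(0x61))!)             // a

// U+D800 is a surrogate code point, so it is NOT a Unicode scalar.
print(Unicode.Scalar(UInt32(0xD800)) == nil)     // true

// U+10FFFF is the upper bound of the scalar range and is accepted.
print(Unicode.Scalar(UInt32(0x10FFFF)) != nil)   // true
```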

Concept 2: The Code Unit is the abstract representation of the encoding.

Consider the following code snippet

let theCat = "Cat!🐱"

for char in theCat.utf8 {
    print("\(char) ", terminator: "") // each UTF-8 code unit, as a decimal number
}
print("")

for char in theCat.utf8 {
    print("\(String(char, radix: 2)) ", terminator: "") // each UTF-8 code unit, as a binary sequence
}
print("")

for char in theCat.utf16 {
    print("\(char) ", terminator: "") // each UTF-16 code unit, as a decimal number
}
print("")

for char in theCat.utf16 {
    print("\(String(char, radix: 2)) ", terminator: "") // each UTF-16 code unit, as a binary sequence
}
print("")

for char in theCat.unicodeScalars {
    print("\(char.value) ", terminator: "") // each Unicode scalar (UTF-32 code unit), as a decimal number
}
print("")

for char in theCat.unicodeScalars {
    print("\(String(char.value, radix: 2)) ", terminator: "") // each Unicode scalar, as a binary sequence
}

Abstract representation means: a code unit is written as a base-10 (decimal) number, which equals the base-2 encoding (binary sequence). The encoding is made for machines; the code unit is for humans, since it is easier to read than a binary sequence.

Concept 3: A character may be represented by different Unicode code point(s), depending on how the character is composed from grapheme clusters (this is why I said "characters" from the human reading perspective at the beginning).

Consider the following code snippet:

let precomposed: String = "\u{D55C}"
let decomposed: String = "\u{1112}\u{1161}\u{11AB}"

print(precomposed.count) // 1
print(decomposed.count)  // 1 — three scalars, but one grapheme cluster (one Character)
print(precomposed) // 한
print(decomposed)  // 한

The precomposed and decomposed strings are visually and linguistically equal, but they have different Unicode code points, and different code units when encoded with the same encoding (see the following example):

for preCha in precomposed.utf16 {
    print("\(preCha) ", terminator: "") // 54620
}
print("")

for deCha in decomposed.utf16 {
    print("\(deCha) ", terminator: "") // 4370 4449 4523
}
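Despite having different scalars and code units, the two forms compare equal in Swift, because `String` equality is based on canonical equivalence rather than on the underlying code points. A short sketch:

```swift
let precomposed = "\u{D55C}"                   // 한 as one scalar
let decomposed = "\u{1112}\u{1161}\u{11AB}"    // 한 as three scalars

// String comparison uses canonical equivalence, so these are equal.
print(precomposed == decomposed)               // true

// The underlying scalar counts still differ.
print(precomposed.unicodeScalars.count)        // 1
print(decomposed.unicodeScalars.count)         // 3
```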

Extra example

var word = "cafe"
print("the number of characters in \(word) is \(word.count)") // 4

word += "\u{301}" // append U+0301, the combining acute accent
print("the number of characters in \(word) is \(word.count)") // still 4 — "café", the accent joins the "e"

Summary: code points, a.k.a. the position indexes of characters in Unicode, have nothing to do with the UTF-8, UTF-16, and UTF-32 encoding schemes.

Further Readings:

http://www.joelonsoftware.com/articles/Unicode.html

http://kunststube.net/encoding/

https://www.mikeash.com/pyblog/friday-qa-2015-11-06-why-is-swifts-string-api-so-hard.html


I think the issue is that Character doesn't represent a Unicode code point. It represents an "extended grapheme cluster", which can consist of multiple code points.

Instead, UnicodeScalar represents a Unicode code point.
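A short sketch of that distinction, using the `unicodeScalars` view that `Character` gained in Swift 4.1:

```swift
// A flag emoji is ONE Character built from TWO regional-indicator scalars.
let cluster: Character = "🇺🇸"
let scalars = cluster.unicodeScalars

print(scalars.count)                              // 2
print(scalars.map { String($0.value, radix: 16) }) // ["1f1fa", "1f1f8"]
```

Each element of `unicodeScalars` is a `Unicode.Scalar`, i.e. one code point; the `Character` is the whole cluster.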