
Xcode UTF-8 literals


  1. I would prefer the way you did it in uni3, but sadly that is not recommended. Failing that, I would prefer the method in uni to that in uni2. Another option would be [NSString stringWithFormat:@"%C", 0x1d11e], but note that %C formats a 16-bit unichar, so a code point outside the BMP such as this one really needs to be written as a UTF-16 surrogate pair (see the sketch after this list).
  2. It is a "universal character name", introduced in C99 (section 6.4.3) and imported into Objective-C as of OS X 10.5. Technically this doesn't have to give you UTF-8 (it's up to the compiler), but in practice UTF-8 is probably what you'll get.
  3. The encoding of the source code file is probably UTF-8, matching what the runtime expects, so everything happens to work. It's also possible the source file is UTF-16 or UTF-32 and the compiler is doing the Right Thing when compiling it. Nonetheless, Apple does not recommend this.
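
A hedged sketch of the stringWithFormat: route from point 1: since %C formats a 16-bit unichar, a code point above U+FFFF such as U+1D11E has to be written as its UTF-16 surrogate pair (0xD834, 0xDD1E).

    // %C takes a single 16-bit unichar, so the G clef (U+1D11E) is split
    // into its UTF-16 surrogate pair before formatting.
    NSString *clef = [NSString stringWithFormat:@"%C%C",
                      (unichar)0xD834, (unichar)0xDD1E];
    NSLog(@"%@", clef);   // prints the musical G clef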


Answers to your questions (same order):

  1. Why choose? Xcode uses C99 in its default setup. Refer to section 6.4.3, Universal Character Names, in the WG14/N1256 draft (C99 with its technical corrigenda). See below.

  2. More technically, the @"\U0001d11e" literal spells out the 32-bit Unicode code point for that character in the ISO 10646 character set.

  3. I would not count on this behavior working. You should absolutely, positively, without question have all the characters in your source file be 7-bit ASCII. For string literals, use an escape sequence or, preferably, a suitable external resource able to handle binary data (see the sketch after this list).
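
A small sketch of point 3: spell the character with an escape so the source stays 7-bit ASCII, or move the literal out of the source entirely into a strings resource (the "GClef" key below is only a hypothetical example, not something from the question).

    // Source file stays pure ASCII: the character is spelled as an escape.
    NSString *clef = @"\U0001d11e";

    // Or load it from a resource; "GClef" is a hypothetical key in Localizable.strings.
    NSString *clefFromResource = NSLocalizedString(@"GClef", nil);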

Universal Character Names (from the WG14/N1256 draft, which Clang follows fairly well):

Universal Character Names may be used in identifiers, character constants, and string literals to designate characters that are not in the basic character set.

The universal character name \Unnnnnnnn designates the character whose eight-digit short identifier (as specified by ISO/IEC 10646) is nnnnnnnn. Similarly, the universal character name \unnnn designates the character whose four-digit short identifier is nnnn (and whose eight-digit short identifier is 0000nnnn).

Therefore, you can produce your character or string in a natural, mixed way:

    char *utf8CStr = "May all your CLEF's \xF0\x9D\x84\x9E be left like this: \U0001d11e";
    NSString *uni4 = [[NSString alloc] initWithUTF8String:utf8CStr];

The \Unnnnnnnn form allows you to select any Unicode code point, and this is the same value as the "Unicode" field at the bottom left of the Character Viewer. The direct entry of \Unnnnnnnn in the C99 source file is handled appropriately by the compiler. Note that there are only two forms: \unnnn, which takes the four-digit short identifier of a Basic Multilingual Plane code point, and \Unnnnnnnn, which takes the full eight-digit identifier of any Unicode code point. You need to pad on the left with 0s if the code point does not need all 4 or all 8 digits of \u or \U.
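
For instance (a minimal sketch; the BMP character U+00E9 is chosen only to show the shorter, zero-padded form):

    NSString *eAcute = @"\u00e9";      // four-digit form, padded to 4 digits (U+00E9)
    NSString *clef   = @"\U0001d11e";  // eight-digit form, padded to 8 digits (U+1D11E)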

The \xF0\x9D\x84\x9E form in the same string literal is more interesting. It inserts the raw UTF-8 encoding of the same character. Once passed to the initWithUTF8String: method, both the universal-character-name literal and the byte-escaped literal end up as the same UTF-8 encoded text.
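
A quick sketch (not part of the original answer) to confirm that both spellings decode to the same NSString, assuming the compiler's execution character set is UTF-8 as discussed above:

    NSString *fromBytes = [[NSString alloc] initWithUTF8String:"\xF0\x9D\x84\x9E"];
    NSString *fromUCN   = [[NSString alloc] initWithUTF8String:"\U0001d11e"];
    NSLog(@"equal: %d", [fromBytes isEqualToString:fromUCN]);   // expected: equal: 1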

It may, arguably, be a violation of 130 of section 5.1.1.2 to use raw bytes in this way. Given that a raw UTF-8 string would be encoded similarly, I think you are OK.


  1. You can write the clef character in your string literal, too:

    NSString *uni2=[[NSString alloc] initWithUTF8String:"𝄞"];
  2. The \U0001d11e matches the Unicode code point for the G clef character. The UTF-32 form of a character is the same as its code point, so you can think of it as UTF-32 if you want to (see the sketch after this list). Here's a link to the Unicode tables for musical symbols.

  3. Your file probably is UTF-8. The G clef encodes to a valid UTF-8 byte sequence - check out the output from hexdump for your file:

    00  4e 53 53 74 72 69 6e 67  20 2a 75 6e 69 33 3d 40  |NSString *uni3=@|
    10  22 f0 9d 84 9e 22 3b 0a  20 20 4e 53 4c 6f 67 28  |"....";.  NSLog(|

    As you can see, the correct UTF-8 representation of that character is in the file right where you'd expect it. It's probably safer to use one of your other methods and try to keep the source file in the ASCII range.
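
To illustrate point 2, the same code point value can also be handed to NSString directly as UTF-32 data (a sketch; the little-endian encoding constant matches the in-memory byte order of the uint32_t on current Apple hardware):

    uint32_t codePoint = 0x0001D11E;   // the same number as in the \U0001d11e escape
    NSString *clef = [[NSString alloc] initWithBytes:&codePoint
                                              length:sizeof(codePoint)
                                            encoding:NSUTF32LittleEndianStringEncoding];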