Are some characters somehow more, uh, byte-consuming than others?
Abso-fucking-lutely. Right-click on the page and click Page Info... (since you're using Firefox). See the Encoding item? It will show that the page is encoded in UTF-8. What is that? Well, under Unicode, characters don't have a single fixed binary representation. Instead, familiar characters such as A have code points such as U+0041. These code points wind up being represented differently depending on which Unicode encoding form (UTF-8, UTF-16, or UTF-32) is used. The most common is UTF-8, which maps each code point to a string of one to four bytes. For code points up to U+007F (the ASCII range), these mappings are exactly the same as ASCII: one byte per character. More exotic (again, from the standpoint of Latin-1) characters map to two- or three-byte strings, and characters with code points above U+FFFF map to four bytes.
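If you want to see this for yourself, here's a quick sketch in Python (my own illustration, not anything taken from the page; the character picks are arbitrary) that prints how many bytes a few characters take once encoded as UTF-8:

    # Print each character's code point, its UTF-8 byte length, and the raw bytes.
    for ch in ["A", "é", "€", "𝄞"]:
        encoded = ch.encode("utf-8")
        hex_bytes = " ".join(f"{b:02x}" for b in encoded)
        print(f"U+{ord(ch):04X} {ch} -> {len(encoded)} byte(s): {hex_bytes}")

Running that shows 1, 2, 3, and 4 bytes respectively.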
In comparison, under UTF-16, every character takes at least two bytes. For code points above U+FFFF, four bytes (a surrogate pair) are used. In UTF-32 (almost unheard of on the web), every character is four bytes long.
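The same trick (again, just an illustrative sketch; I use the little-endian variants so a byte-order mark doesn't pad the counts) shows how the three encodings compare:

    # Byte length of the same character under UTF-8, UTF-16, and UTF-32.
    for ch in ["A", "€", "𝄞"]:
        sizes = {enc: len(ch.encode(enc)) for enc in ("utf-8", "utf-16-le", "utf-32-le")}
        print(f"U+{ord(ch):04X}: {sizes}")

For plain A, UTF-8 wins at one byte; for the treble clef up above U+FFFF, all three encodings end up at four bytes.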
In short, as long as you stay at or below U+007F (plain ASCII) and are using UTF-8, there is no difference between characters: they're all one byte each.