Edit: This post refers to a time before we implemented Unicode in Spike2 version 8. Both Spike2 and Signal now use the Unicode character set; the transition seems to have been pretty painless (at least we have not had any complaints!).
Currently, Spike2 (and Signal) handle text internally as 8-bit characters. The characters with codes 0-127 are the fixed ASCII set, while characters with codes 128-255 have meanings that depend on the "code page" set in the operating system. These extra code-page characters allow you to use non-ASCII national characters in scripts for comments and in literal strings ("string"). However, if text encoded for one code page is sent to a user running a different one, the result is a mess (except for the ASCII characters in the range 0-127). Worse, if you use a language that requires many thousands of characters (Chinese or Japanese, for example), you have little hope of success.
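To illustrate the problem, here is a sketch (in Python rather than the Spike2 script language, with made-up sample text) of the same run of bytes interpreted under three different Windows code pages; only the interpretation that matches the writer's code page comes out correctly:

    # Ordinary Python, not Spike2 script: the same bytes read under different code pages.
    data = "café ø à".encode("cp1252")    # example text as typed on a Western European PC

    print(data.decode("cp1252"))          # café ø à  - correct on the writer's machine
    print(data.decode("cp1251"))          # cafй ш а  - the same bytes on a Cyrillic code page
    print(data.decode("cp1253"))          # cafι ψ ΰ  - and again on a Greek code page

The bytes never change; only the table used to turn them into characters does, which is why text using codes in the 128-255 range cannot travel safely between machines.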
We are experimenting with changing over to a Unicode-based system (an international standard that allows around a million different characters). If we do this, then users in China will be able to type comments and strings in scripts in Chinese, and if they send such a script to me in England, the characters will still be correct (as long as I install the right language support), regardless of my local code page. It will also allow the use of special characters (pi, the degree sign and so on).
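As a rough illustration (again in Python, not Spike2 script), every character below has a single fixed Unicode code point and a fixed UTF-8 byte sequence, whatever code page the machine happens to use:

    # Ordinary Python, not Spike2 script: fixed Unicode code points and their UTF-8 bytes.
    for ch in "π°文字":
        print(ch, hex(ord(ch)), ch.encode("utf-8"))
    # π  0x3c0   b'\xcf\x80'
    # °  0xb0    b'\xc2\xb0'
    # 文 0x6587  b'\xe6\x96\x87'
    # 字 0x5b57  b'\xe5\xad\x97'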
It is relatively easy for us to allow you to type comments into your scripts in your local language, as the text editor we use can work with UTF-8 as well as with the local code page. A simple-minded use of this also lets us put such text into script strings, and the text displays correctly if the output is sent to the log view, for example. However, if you use it to set a channel title, the title will display as rubbish characters in a time view. Getting these to display correctly means the vastly bigger task of converting the entire program to use Unicode.
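The "rubbish characters" come from UTF-8 bytes being pushed through code that still expects code-page text. A minimal sketch of the effect (Python again, with an imaginary channel title, not the real Spike2 internals):

    # Ordinary Python, not Spike2 script: UTF-8 bytes shown by a code-page-only display.
    title = "温度 (°C)"                    # an imaginary channel title typed in the UTF-8 editor
    raw = title.encode("utf-8")            # the bytes the script would hand on

    print(raw.decode("utf-8"))             # 温度 (°C)   - what a Unicode-aware view would show
    print(raw.decode("cp1252"))            # æ¸©åº¦ (Â°C) - what a code-page-only view shows today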
The reason I am writing this is to ask you whether you are already making use of the local code page to encode text other than ASCII characters. For example, if you are in Europe, are you using non-ASCII national characters (ø, à and the like)? If you are in Japan, are you writing using ひらがな (hiragana) or in 日本語 (Japanese, with kanji)? Do you write comments in Chinese or Korean? It may be that these characters do not display in your browser...
If no-one is doing this, we could switch very quickly to using UTF-8 encoding for the script system (so you can write comments in your local language), with a slower transition to full Unicode throughout the program. However, if a lot of people are already using code-page-based national characters, particularly in time and result views, toolbars and user-defined dialogs, then we cannot make this change until the entire program is converted.
Please respond to the poll to let us know what you do (or would like to do).