
codecs — Codec registry and base classes

Source code: Lib/codecs.py


This module defines base classes for standard Python codecs (encoders and decoders) and provides access to the internal Python codec registry, which manages the codec and error handling lookup process. Most standard codecs are text encodings, which encode text to bytes, but there are also codecs provided that encode text to text, and bytes to bytes. Custom codecs may encode and decode between arbitrary types, but some module features are restricted to use specifically with text encodings, or with codecs that encode to bytes.

The module defines the following functions for encoding and decoding with any codec:

codecs.encode(obj, encoding='utf-8', errors='strict')

Encodes obj using the codec registered for encoding.

Errors may be given to set the desired error handling scheme. The default error handler is 'strict' meaning that encoding errors raise ValueError (or a more codec specific subclass, such as UnicodeEncodeError). Refer to Codec Base Classes for more information on codec error handling.

codecs.decode(obj, encoding='utf-8', errors='strict')

Decodes obj using the codec registered for encoding.

Errors may be given to set the desired error handling scheme. The default error handler is 'strict' meaning that decoding errors raise ValueError (or a more codec specific subclass, such as UnicodeDecodeError). Refer to Codec Base Classes for more information on codec error handling.
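
For example (a minimal illustration using the built-in UTF-8 codec):

    >>> import codecs
    >>> codecs.encode('français', 'utf-8')
    b'fran\xc3\xa7ais'
    >>> codecs.decode(b'fran\xc3\xa7ais', 'utf-8')
    'français'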

The full details for each codec can also be looked up directly:

codecs.lookup(encoding)

Looks up the codec info in the Python codec registry and returns a CodecInfo object as defined below.

Encodings are first looked up in the registry’s cache. If not found, the list of registered search functions is scanned. If no CodecInfo object is found, a LookupError is raised. Otherwise, the CodecInfo object is stored in the cache and returned to the caller.

class codecs.CodecInfo(encode, decode, streamreader=None, streamwriter=None, incrementalencoder=None, incrementaldecoder=None, name=None)

Codec details when looking up the codec registry. The constructor arguments are stored in attributes of the same name:

name

The name of the encoding.

encode
decode

The stateless encoding and decoding functions. These must be functions or methods which have the same interface as the encode() and decode() methods of Codec instances (see Codec Interface). The functions or methods are expected to work in a stateless mode.

incrementalencoder
incrementaldecoder

Incremental encoder and decoder classes or factory functions. These have to provide the interface defined by the base classes IncrementalEncoder and IncrementalDecoder, respectively. Incremental codecs can maintain state.

streamwriter
streamreader

Stream writer and reader classes or factory functions. These have to provide the interface defined by the base classes StreamWriter and StreamReader, respectively. Stream codecs can maintain state.
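
As a brief illustration, lookup() returns a CodecInfo whose attributes are the pieces described above:

    >>> import codecs
    >>> info = codecs.lookup('utf-8')
    >>> info.name
    'utf-8'
    >>> info.encode('abc')          # stateless encoder: (output, length consumed)
    (b'abc', 3)
    >>> info.streamreader is codecs.getreader('utf-8')
    True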

To simplify access to the various codec components, the module provides these additional functions which use lookup() for the codec lookup:

codecs.getencoder(encoding)

Look up the codec for the given encoding and return its encoder function.

Raises a LookupError in case the encoding cannot be found.

codecs.getdecoder(encoding)

Look up the codec for the given encoding and return its decoder function.

Raises a LookupError in case the encoding cannot be found.

codecs.getincrementalencoder(encoding)

Look up the codec for the given encoding and return its incremental encoder class or factory function.

Raises a LookupError in case the encoding cannot be found or the codec doesn’t support an incremental encoder.

codecs.getincrementaldecoder(encoding)

Look up the codec for the given encoding and return its incremental decoder class or factory function.

Raises a LookupError in case the encoding cannot be found or the codec doesn’t support an incremental decoder.

codecs.getreader(encoding)

Look up the codec for the given encoding and return its StreamReader class or factory function.

Raises a LookupError in case the encoding cannot be found.

codecs.getwriter(encoding)

Look up the codec for the given encoding and return its StreamWriter class or factory function.

Raises a LookupError in case the encoding cannot be found.

Custom codecs are made available by registering a suitable codec search function:

codecs.register(search_function)

Register a codec search function. Search functions are expected to take one argument, being the encoding name in all lower case letters, and return a CodecInfo object. In case a search function cannot find a given encoding, it should return None.

Note

Search function registration is not currently reversible, which may cause problems in some cases, such as unit testing or module reloading.
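
A minimal sketch of a search function (the name 'myrot13' and the helper below are made up for illustration; they simply reuse the existing rot_13 codec):

    import codecs

    def search_myrot13(name):
        # Return a CodecInfo for the hypothetical name 'myrot13', None otherwise.
        if name == 'myrot13':
            base = codecs.lookup('rot_13')
            return codecs.CodecInfo(name='myrot13',
                                    encode=base.encode,
                                    decode=base.decode)
        return None

    codecs.register(search_myrot13)
    codecs.encode('hello', 'myrot13')    # returns 'uryyb'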

While the built-in open() and the associated io module are the recommended approach for working with encoded text files, this module provides additional utility functions and classes that allow the use of a wider range of codecs when working with binary files:

codecs.open(filename, mode='r', encoding=None, errors='strict', buffering=1)

Open an encoded file using the given mode and return an instance of StreamReaderWriter, providing transparent encoding/decoding. The default file mode is 'r', meaning to open the file in read mode.

Note

Underlying encoded files are always opened in binary mode. No automatic conversion of '\n' is done on reading and writing. The mode argument may be any binary mode acceptable to the built-in open() function; the 'b' is automatically added.

encoding specifies the encoding which is to be used for the file. Any encoding that encodes to and decodes from bytes is allowed, and the data types supported by the file methods depend on the codec used.

errors may be given to define the error handling. It defaults to 'strict' which causes a ValueError to be raised in case an encoding error occurs.

buffering has the same meaning as for the built-in open() function. It defaults to line buffered.
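
A short sketch (the file name is arbitrary); note that for plain text files the built-in open() is usually preferable:

    import codecs

    # Write and read a Latin-1 encoded file; the returned object accepts
    # and produces str while the underlying file stores bytes.
    with codecs.open('example.txt', 'w', encoding='latin-1') as f:
        f.write('café\n')

    with codecs.open('example.txt', encoding='latin-1') as f:
        print(f.read())        # café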

codecs.EncodedFile(file, data_encoding, file_encoding=None, errors='strict')

Return a StreamRecoder instance, a wrapped version of file which provides transparent transcoding. The original file is closed when the wrapped version is closed.

Data written to the wrapped file is decoded according to the given data_encoding and then written to the original file as bytes using file_encoding. Bytes read from the original file are decoded according to file_encoding, and the result is encoded using data_encoding.

If file_encoding is not given, it defaults to data_encoding.

errors may be given to define the error handling. It defaults to 'strict', which causes ValueError to be raised in case an encoding error occurs.

codecs.iterencode(iterator, encoding, errors='strict', **kwargs)

Uses an incremental encoder to iteratively encode the input provided by iterator. This function is a generator. The errors argument (as well as any other keyword argument) is passed through to the incremental encoder.

This function requires that the codec accept text str objects to encode. Therefore it does not support bytes-to-bytes encoders such as base64_codec.

codecs.iterdecode(iterator, encoding, errors='strict', **kwargs)

Uses an incremental decoder to iteratively decode the input provided by iterator. This function is a generator. The errors argument (as well as any other keyword argument) is passed through to the incremental decoder.

This function requires that the codec accept bytes objects to decode. Therefore it does not support text-to-text encoders such as rot_13, although rot_13 may be used equivalently with iterencode().
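
For example, encoding an iterable of strings chunk by chunk and decoding it back (a minimal illustration):

    import codecs

    chunks = ['Hello, ', 'wörld']
    encoded = list(codecs.iterencode(chunks, 'utf-8'))
    # encoded == [b'Hello, ', b'w\xc3\xb6rld']
    decoded = ''.join(codecs.iterdecode(encoded, 'utf-8'))
    # decoded == 'Hello, wörld'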

The module also provides the following constants which are useful for reading and writing to platform dependent files:

codecs.BOM
codecs.BOM_BE
codecs.BOM_LE
codecs.BOM_UTF8
codecs.BOM_UTF16
codecs.BOM_UTF16_BE
codecs.BOM_UTF16_LE
codecs.BOM_UTF32
codecs.BOM_UTF32_BE
codecs.BOM_UTF32_LE

These constants define various byte sequences, being Unicode byte order marks (BOMs) for several encodings. They are used in UTF-16 and UTF-32 data streams to indicate the byte order used, and in UTF-8 as a Unicode signature. BOM_UTF16 is either BOM_UTF16_BE or BOM_UTF16_LE depending on the platform’s native byte order, BOM is an alias for BOM_UTF16, BOM_LE for BOM_UTF16_LE and BOM_BE for BOM_UTF16_BE. The others represent the BOM in UTF-8 and UTF-32 encodings.
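
For example, the BOM constants can be used to sniff the byte order of a UTF-16 stream (which branch runs depends on the platform that produced the data):

    import codecs

    data = 'héllo'.encode('utf-16')        # native byte order, BOM prepended
    if data.startswith(codecs.BOM_UTF16_LE):
        byte_order = 'little-endian'
    elif data.startswith(codecs.BOM_UTF16_BE):
        byte_order = 'big-endian'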

Codec Base Classes

The codecs module defines a set of base classes which define the interfaces for working with codec objects, and can also be used as the basis for custom codec implementations.

Each codec has to define four interfaces to make it usable as a codec in Python: stateless encoder, stateless decoder, stream reader and stream writer. The stream reader and writers typically reuse the stateless encoder/decoder to implement the file protocols. Codec authors also need to define how the codec will handle encoding and decoding errors.

Error Handlers

To simplify and standardize error handling, codecs may implement different error handling schemes by accepting the errors string argument. The following string values are defined and implemented by all standard Python codecs:

Value | Meaning
'strict' | Raise UnicodeError (or a subclass); this is the default. Implemented in strict_errors().
'ignore' | Ignore the malformed data and continue without further notice. Implemented in ignore_errors().

The following error handlers are only applicable to text encodings:

Value | Meaning
'replace' | Replace with a suitable replacement marker; Python will use the official U+FFFD REPLACEMENT CHARACTER for the built-in codecs on decoding, and '?' on encoding. Implemented in replace_errors().
'xmlcharrefreplace' | Replace with the appropriate XML character reference (only for encoding). Implemented in xmlcharrefreplace_errors().
'backslashreplace' | Replace with backslashed escape sequences. Implemented in backslashreplace_errors().
'namereplace' | Replace with \N{...} escape sequences (only for encoding). Implemented in namereplace_errors().
'surrogateescape' | On decoding, replace byte with individual surrogate code ranging from U+DC80 to U+DCFF. This code will then be turned back into the same byte when the 'surrogateescape' error handler is used when encoding the data. (See PEP 383 for more.)

In addition, the following error handler is specific to the given codecs:

Value | Codecs | Meaning
'surrogatepass' | utf-8, utf-16, utf-32, utf-16-be, utf-16-le, utf-32-be, utf-32-le | Allow encoding and decoding of surrogate codes. These codecs normally treat the presence of surrogates as an error.

New in version 3.1: The 'surrogateescape' and 'surrogatepass' error handlers.

Changed in version 3.4: The 'surrogatepass' error handler now works with utf-16* and utf-32* codecs.

New in version 3.5: The 'namereplace' error handler.

Changed in version 3.5: The 'backslashreplace' error handler now works with decoding and translating.
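
The effect of the handlers can be seen directly with str.encode() and bytes.decode():

    >>> 'Zürich'.encode('ascii', errors='replace')
    b'Z?rich'
    >>> 'Zürich'.encode('ascii', errors='xmlcharrefreplace')
    b'Z&#252;rich'
    >>> 'Zürich'.encode('ascii', errors='namereplace')
    b'Z\\N{LATIN SMALL LETTER U WITH DIAERESIS}rich'
    >>> b'Z\xfcrich'.decode('utf-8', errors='backslashreplace')
    'Z\\xfcrich'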

The set of allowed values can be extended by registering a new named error handler:

codecs.register_error(name, error_handler)

Register the error handling function error_handler under the name name. The error_handler argument will be called during encoding and decoding in case of an error, when name is specified as the errors parameter.

For encoding, error_handler will be called with a UnicodeEncodeError instance, which contains information about the location of the error. The error handler must either raise this or a different exception, or return a tuple with a replacement for the unencodable part of the input and a position where encoding should continue. The replacement may be either str or bytes. If the replacement is bytes, the encoder will simply copy them into the output buffer. If the replacement is a string, the encoder will encode the replacement. Encoding continues on original input at the specified position. Negative position values will be treated as being relative to the end of the input string. If the resulting position is out of bounds an IndexError will be raised.

Decoding and translating works similarly, except UnicodeDecodeError or UnicodeTranslateError will be passed to the handler and that the replacement from the error handler will be put into the output directly.
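
A minimal sketch of a custom handler (the name 'hyphenreplace' and the function below are invented for illustration):

    import codecs

    def hyphen_replace(exc):
        # Substitute one '-' per unencodable character and resume after them.
        if isinstance(exc, UnicodeEncodeError):
            return ('-' * (exc.end - exc.start), exc.end)
        raise exc

    codecs.register_error('hyphenreplace', hyphen_replace)
    'Grüße'.encode('ascii', errors='hyphenreplace')    # b'Gr--e'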

Previously registered error handlers (including the standard error handlers) can be looked up by name:

codecs.lookup_error(name)

Return the error handler previously registered under the name name.

Raises a LookupError in case the handler cannot be found.

The following standard error handlers are also made available as module level functions:

codecs.strict_errors(exception)

Implements the 'strict' error handling: each encoding or decoding error raises a UnicodeError.

codecs.replace_errors(exception)

Implements the 'replace' error handling (for text encodings only): substitutes '?' for encoding errors (to be encoded by the codec), and '\ufffd' (the Unicode replacement character) for decoding errors.

codecs.ignore_errors(exception)

Implements the 'ignore' error handling: malformed data is ignored and encoding or decoding is continued without further notice.

codecs.xmlcharrefreplace_errors(exception)

Implements the 'xmlcharrefreplace' error handling (for encoding with text encodings only): the unencodable character is replaced by an appropriate XML character reference.

codecs.backslashreplace_errors(exception)

Implements the 'backslashreplace' error handling (for text encodings only): malformed data is replaced by a backslashed escape sequence.

codecs.namereplace_errors(exception)

Implements the 'namereplace' error handling (for encoding with text encodings only): the unencodable character is replaced by a \N{...} escape sequence.

New in version 3.5.

Stateless Encoding and Decoding

The base Codec class defines these methods which also define the function interfaces of the stateless encoder and decoder:

Codec.encode(input[, errors])

Encodes the object input and returns a tuple (output object, length consumed). For instance, text encoding converts a string object to a bytes object using a particular character set encoding (e.g., cp1252 or iso-8859-1).

The errors argument defines the error handling to apply. It defaults to 'strict' handling.

The method may not store state in the Codec instance. Use StreamWriter for codecs which have to keep state in order to make encoding efficient.

The encoder must be able to handle zero length input and return an empty object of the output object type in this situation.

Codec.decode(input[, errors])

Decodes the object input and returns a tuple (output object, length consumed). For instance, for a text encoding, decoding converts a bytes object encoded using a particular character set encoding to a string object.

For text encodings and bytes-to-bytes codecs, input must be a bytes object or one which provides the read-only buffer interface – for example, buffer objects and memory mapped files.

The errors argument defines the error handling to apply. It defaults to 'strict' handling.

The method may not store state in the Codec instance. Use StreamReader for codecs which have to keep state in order to make decoding efficient.

The decoder must be able to handle zero length input and return an empty object of the output object type in this situation.
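
The stateless functions returned by getencoder() and getdecoder() follow exactly this interface:

    >>> import codecs
    >>> codecs.getencoder('utf-8')('spåm')
    (b'sp\xc3\xa5m', 4)
    >>> codecs.getdecoder('utf-8')(b'sp\xc3\xa5m')
    ('spåm', 5)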

Incremental Encoding and Decoding

The IncrementalEncoder and IncrementalDecoder classes provide the basic interface for incremental encoding and decoding. Encoding/decoding the input isn’t done with one call to the stateless encoder/decoder function, but with multiple calls to the encode()/decode() method of the incremental encoder/decoder. The incremental encoder/decoder keeps track of the encoding/decoding process during method calls.

The joined output of calls to the encode()/decode() method is the same as if all the single inputs were joined into one, and this input was encoded/decoded with the stateless encoder/decoder.
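
For example, an incremental decoder can consume a multi-byte character that is split across chunks (a minimal illustration):

    import codecs

    decoder = codecs.getincrementaldecoder('utf-8')()
    parts = [b'caf\xc3', b'\xa9!']                     # 'é' split across chunks
    text = ''.join(decoder.decode(chunk) for chunk in parts)
    text += decoder.decode(b'', final=True)
    # text == 'café!'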

IncrementalEncoder Objects

The IncrementalEncoder class is used for encoding an input in multiple steps. It defines the following methods which every incremental encoder must define in order to be compatible with the Python codec registry.

class codecs.IncrementalEncoder(errors='strict')

Constructor for an IncrementalEncoder instance.

All incremental encoders must provide this constructor interface. They are free to add additional keyword arguments, but only the ones defined here are used by the Python codec registry.

The IncrementalEncoder may implement different error handling schemes by providing the errors keyword argument. See Error Handlers for possible values.

The errors argument will be assigned to an attribute of the same name. Assigning to this attribute makes it possible to switch between different error handling strategies during the lifetime of the IncrementalEncoder object.

encode(object[, final])

Encodes object (taking the current state of the encoder into account) and returns the resulting encoded object. If this is the last call to encode() final must be true (the default is false).

reset()

Reset the encoder to the initial state. The output is discarded: call .encode(object, final=True), passing an empty byte or text string if necessary, to reset the encoder and to get the output.

getstate()

Return the current state of the encoder which must be an integer. The implementation should make sure that 0 is the most common state. (States that are more complicated than integers can be converted into an integer by marshaling/pickling the state and encoding the bytes of the resulting string into an integer.)

setstate(state)

Set the state of the encoder to state. state must be an encoder state returned by getstate().

IncrementalDecoder Objects

The IncrementalDecoder class is used for decoding an input in multiple steps. It defines the following methods which every incremental decoder must define in order to be compatible with the Python codec registry.

class codecs.IncrementalDecoder(errors='strict')

Constructor for an IncrementalDecoder instance.

All incremental decoders must provide this constructor interface. They are free to add additional keyword arguments, but only the ones defined here are used by the Python codec registry.

The IncrementalDecoder may implement different error handling schemes by providing the errors keyword argument. See Error Handlers for possible values.

The errors argument will be assigned to an attribute of the same name. Assigning to this attribute makes it possible to switch between different error handling strategies during the lifetime of the IncrementalDecoder object.

decode(object[, final])

Decodes object (taking the current state of the decoder into account) and returns the resulting decoded object. If this is the last call to decode() final must be true (the default is false). If final is true the decoder must decode the input completely and must flush all buffers. If this isn’t possible (e.g. because of incomplete byte sequences at the end of the input) it must initiate error handling just like in the stateless case (which might raise an exception).

reset()

Reset the decoder to the initial state.

getstate()

Return the current state of the decoder. This must be a tuple with two items, the first must be the buffer containing the still undecoded input. The second must be an integer and can be additional state info. (The implementation should make sure that 0 is the most common additional state info.) If this additional state info is 0 it must be possible to set the decoder to the state which has no input buffered and 0 as the additional state info, so that feeding the previously buffered input to the decoder returns it to the previous state without producing any output. (Additional state info that is more complicated than integers can be converted into an integer by marshaling/pickling the info and encoding the bytes of the resulting string into an integer.)

setstate(state)

Set the state of the decoder to state. state must be a decoder state returned by getstate().

Stream Encoding and Decoding

The StreamWriter and StreamReader classes provide generic working interfaces which can be used to implement new encoding submodules very easily. See encodings.utf_8 for an example of how this is done.
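
A small sketch using an in-memory byte stream (io.BytesIO stands in for any binary file object):

    import codecs
    import io

    raw = io.BytesIO()
    writer = codecs.getwriter('utf-16')(raw)    # StreamWriter around the byte stream
    writer.write('straße\n')

    raw.seek(0)
    reader = codecs.getreader('utf-16')(raw)    # StreamReader around the same bytes
    reader.read()                               # returns 'straße\n'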

StreamWriter Objects

The StreamWriter class is a subclass of Codec and defines the following methods which every stream writer must define in order to be compatible with the Python codec registry.

class codecs.StreamWriter(stream, errors='strict')

Constructor for a StreamWriter instance.

All stream writers must provide this constructor interface. They are free to add additional keyword arguments, but only the ones defined here are used by the Python codec registry.

The stream argument must be a file-like object open for writing text or binary data, as appropriate for the specific codec.

The StreamWriter may implement different error handling schemes by providing the errors keyword argument. See Error Handlers for the standard error handlers the underlying stream codec may support.

The errors argument will be assigned to an attribute of the same name. Assigning to this attribute makes it possible to switch between different error handling strategies during the lifetime of the StreamWriter object.

write(object)

Writes the object’s contents encoded to the stream.

writelines(list)

Writes the concatenated list of strings to the stream (possibly by reusing the write() method). The standard bytes-to-bytes codecs do not support this method.

reset()

Flushes and resets the codec buffers used for keeping state.

Calling this method should ensure that the data on the output is put into a clean state that allows appending of new fresh data without having to rescan the whole stream to recover state.

In addition to the above methods, the StreamWriter must also inherit all other methods and attributes from the underlying stream.

StreamReader Objects

The StreamReader class is a subclass of Codec and defines the following methods which every stream reader must define in order to be compatible with the Python codec registry.

class codecs.StreamReader(stream, errors='strict')

Constructor for a StreamReader instance.

All stream readers must provide this constructor interface. They are free to add additional keyword arguments, but only the ones defined here are used by the Python codec registry.

The stream argument must be a file-like object open for reading text or binary data, as appropriate for the specific codec.

The StreamReader may implement different error handling schemes by providing the errors keyword argument. See Error Handlers for the standard error handlers the underlying stream codec may support.

The errors argument will be assigned to an attribute of the same name. Assigning to this attribute makes it possible to switch between different error handling strategies during the lifetime of the StreamReader object.

The set of allowed values for the errors argument can be extended with register_error().

read([size[, chars[, firstline]]])

Decodes data from the stream and returns the resulting object.

The chars argument indicates the number of decoded code points or bytes to return. The read() method will never return more data than requested, but it might return less, if there is not enough available.

The size argument indicates the approximate maximum number of encoded bytes or code points to read for decoding. The decoder can modify this setting as appropriate. The default value -1 indicates to read and decode as much as possible. This parameter is intended to prevent having to decode huge files in one step.

The firstline flag indicates that it would be sufficient to only return the first line, if there are decoding errors on later lines.

The method should use a greedy read strategy meaning that it should read as much data as is allowed within the definition of the encoding and the given size, e.g. if optional encoding endings or state markers are available on the stream, these should be read too.

readline([size[, keepends]])

Read one line from the input stream and return the decoded data.

size, if given, is passed as size argument to the stream’s read() method.

If keepends is false line-endings will be stripped from the lines returned.

readlines([sizehint[, keepends]])

Read all lines available on the input stream and return them as a list of lines.

Line-endings are implemented using the codec’s decoder method and are included in the list entries if keepends is true.

sizehint, if given, is passed as the size argument to the stream’s read() method.

reset()

Resets the codec buffers used for keeping state.

Note that no stream repositioning should take place. This method is primarily intended to be able to recover from decoding errors.

In addition to the above methods, the StreamReader must also inherit all other methods and attributes from the underlying stream.

StreamReaderWriter Objects

The StreamReaderWriter is a convenience class that allows wrapping streams which work in both read and write modes.

The design is such that one can use the factory functions returned by the lookup() function to construct the instance.

class codecs.StreamReaderWriter(stream, Reader, Writer, errors='strict')

Creates a StreamReaderWriter instance. stream must be a file-like object. Reader and Writer must be factory functions or classes providing the StreamReader and StreamWriter interface respectively. Error handling is done in the same way as defined for the stream readers and writers.

StreamReaderWriter instances define the combined interfaces of StreamReader and StreamWriter classes. They inherit all other methods and attributes from the underlying stream.

StreamRecoder Objects

The StreamRecoder translates data from one encoding to another, which is sometimes useful when dealing with different encoding environments.

The design is such that one can use the factory functions returned by the lookup() function to construct the instance.

class codecs.StreamRecoder(stream, encode, decode, Reader, Writer, errors='strict')

Creates a StreamRecoder instance which implements a two-way conversion: encode and decode work on the frontend (the data visible to code calling read() and write()), while Reader and Writer work on the backend (the data in stream).

You can use these objects to do transparent transcodings from e.g. Latin-1 to UTF-8 and back.

The stream argument must be a file-like object.

The encode and decode arguments must adhere to the Codec interface. Reader and Writer must be factory functions or classes providing objects of the StreamReader and StreamWriter interface respectively.

Error handling is done in the same way as defined for the stream readers and writers.

StreamRecoder instances define the combined interfaces of StreamReader and StreamWriter classes. They inherit all other methods and attributes from the underlying stream.

Encodings and Unicode

Strings are stored internally as sequences of code points in the range U+0000 to U+10FFFF. (See PEP 393 for more details about the implementation.) Once a string object is used outside of CPU and memory, endianness and how these arrays are stored as bytes become an issue. As with other codecs, serialising a string into a sequence of bytes is known as encoding, and recreating the string from the sequence of bytes is known as decoding. There are a variety of different text serialisation codecs, which are collectively referred to as text encodings.

The simplest text encoding (called 'latin-1' or 'iso-8859-1') maps the code points 0–255 to the bytes 0x0–0xff, which means that a string object that contains code points above U+00FF can’t be encoded with this codec. Doing so will raise a UnicodeEncodeError that looks like the following (although the details of the error message may differ): UnicodeEncodeError: 'latin-1' codec can't encode character '\u1234' in position 3: ordinal not in range(256).
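
Reproduced interactively, the failure looks like this:

    >>> 'abc\u1234'.encode('latin-1')
    Traceback (most recent call last):
      ...
    UnicodeEncodeError: 'latin-1' codec can't encode character '\u1234' in position 3: ordinal not in range(256)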

There’s another group of encodings (the so called charmap encodings) that choose a different subset of all Unicode code points and how these code points are mapped to the bytes 0x0–0xff. To see how this is done simply open e.g. encodings/cp1252.py (which is an encoding that is used primarily on Windows). There’s a string constant with 256 characters that shows you which character is mapped to which byte value.

All of these encodings can only encode 256 of the 1114112 code points defined in Unicode. A simple and straightforward way to store each Unicode code point is to store each code point as four consecutive bytes. There are two possibilities: store the bytes in big endian or in little endian order. These two encodings are called UTF-32-BE and UTF-32-LE respectively. Their disadvantage is that if e.g. you use UTF-32-BE on a little endian machine you will always have to swap bytes on encoding and decoding. UTF-32 avoids this problem: bytes will always be in natural endianness. When these bytes are read by a CPU with a different endianness, then bytes have to be swapped though. To be able to detect the endianness of a UTF-16 or UTF-32 byte sequence, there’s the so called BOM (“Byte Order Mark”). This is the Unicode character U+FEFF. This character can be prepended to every UTF-16 or UTF-32 byte sequence. The byte swapped version of this character (0xFFFE) is an illegal character that may not appear in a Unicode text. So when the first character in a UTF-16 or UTF-32 byte sequence appears to be a U+FFFE the bytes have to be swapped on decoding. Unfortunately the character U+FEFF had a second purpose as a ZERO WIDTH NO-BREAK SPACE: a character that has no width and doesn’t allow a word to be split. It can e.g. be used to give hints to a ligature algorithm. With Unicode 4.0 using U+FEFF as a ZERO WIDTH NO-BREAK SPACE has been deprecated (with U+2060 (WORD JOINER) assuming this role). Nevertheless Unicode software still must be able to handle U+FEFF in both roles: as a BOM it’s a device to determine the storage layout of the encoded bytes, and vanishes once the byte sequence has been decoded into a string; as a ZERO WIDTH NO-BREAK SPACE it’s a normal character that will be decoded like any other.

There’s another encoding that is able to encode the full range of Unicode characters: UTF-8. UTF-8 is an 8-bit encoding, which means there are no issues with byte order in UTF-8. Each byte in a UTF-8 byte sequence consists of two parts: marker bits (the most significant bits) and payload bits. The marker bits are a sequence of zero to four 1 bits followed by a 0 bit. Unicode characters are encoded like this (with x being payload bits, which when concatenated give the Unicode character):

Range | Encoding
U-00000000 to U-0000007F | 0xxxxxxx
U-00000080 to U-000007FF | 110xxxxx 10xxxxxx
U-00000800 to U-0000FFFF | 1110xxxx 10xxxxxx 10xxxxxx
U-00010000 to U-0010FFFF | 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx

The least significant bit of the Unicode character is the rightmost x bit.

As UTF-8 is an 8-bit encoding no BOM is required and any U+FEFF character in the decoded string (even if it’s the first character) is treated as a ZERO WIDTH NO-BREAK SPACE.

Without external information it’s impossible to reliably determine which encoding was used for encoding a string. Each charmap encoding can decode any random byte sequence. However that’s not possible with UTF-8, as UTF-8 byte sequences have a structure that doesn’t allow arbitrary byte sequences. To increase the reliability with which a UTF-8 encoding can be detected, Microsoft invented a variant of UTF-8 (that Python 2.5 calls "utf-8-sig") for its Notepad program: Before any of the Unicode characters is written to the file, a UTF-8 encoded BOM (which looks like this as a byte sequence: 0xef, 0xbb, 0xbf) is written. As it’s rather improbable that any charmap encoded file starts with these byte values (which would e.g. map to

LATIN SMALL LETTER I WITH DIAERESIS
RIGHT-POINTING DOUBLE ANGLE QUOTATION MARK
INVERTED QUESTION MARK

in iso-8859-1), this increases the probability that a utf-8-sig encoding can be correctly guessed from the byte sequence. So here the BOM is not used to be able to determine the byte order used for generating the byte sequence, but as a signature that helps in guessing the encoding. On encoding the utf-8-sig codec will write 0xef, 0xbb, 0xbf as the first three bytes to the file. On decoding utf-8-sig will skip those three bytes if they appear as the first three bytes in the file. In UTF-8, the use of the BOM is discouraged and should generally be avoided.
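
The difference between the two codecs is visible at the byte level:

    >>> '12345'.encode('utf-8-sig')
    b'\xef\xbb\xbf12345'
    >>> b'\xef\xbb\xbfabc'.decode('utf-8-sig')
    'abc'
    >>> b'\xef\xbb\xbfabc'.decode('utf-8')    # plain UTF-8 keeps the BOM as U+FEFF
    '\ufeffabc'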

Standard Encodings

Python comes with a number of codecs built-in, either implemented as C functions or with dictionaries as mapping tables. The following table lists the codecs by name, together with a few common aliases, and the languages for which the encoding is likely used. Neither the list of aliases nor the list of languages is meant to be exhaustive. Notice that spelling alternatives that only differ in case or use a hyphen instead of an underscore are also valid aliases; therefore, e.g. 'utf-8' is a valid alias for the 'utf_8' codec.

CPython implementation detail: Some common encodings can bypass the codecs lookup machinery to improve performance. These optimization opportunities are only recognized by CPython for a limited set of (case insensitive) aliases: utf-8, utf8, latin-1, latin1, iso-8859-1, iso8859-1, mbcs (Windows only), ascii, us-ascii, utf-16, utf16, utf-32, utf32, and the same using underscores instead of dashes. Using alternative aliases for these encodings may result in slower execution.

Changed in version 3.6: Optimization opportunity recognized for us-ascii.

Many of the character sets support the same languages. They vary in individual characters (e.g. whether the EURO SIGN is supported or not), and in the assignment of characters to code positions. For the European languages in particular, the following variants typically exist:

  • an ISO 8859 codeset
  • a Microsoft Windows code page, which is typically derived from an 8859 codeset, but replaces control characters with additional graphic characters
  • an IBM EBCDIC code page
  • an IBM PC code page, which is ASCII compatible

Codec | Aliases | Languages
ascii | 646, us-ascii | English
big5 | big5-tw, csbig5 | Traditional Chinese
big5hkscs | big5-hkscs, hkscs | Traditional Chinese
cp037 | IBM037, IBM039 | English
cp273 | 273, IBM273, csIBM273 | German (New in version 3.4)
cp424 | EBCDIC-CP-HE, IBM424 | Hebrew
cp437 | 437, IBM437 | English
cp500 | EBCDIC-CP-BE, EBCDIC-CP-CH, IBM500 | Western Europe
cp720 |  | Arabic
cp737 |  | Greek
cp775 | IBM775 | Baltic languages
cp850 | 850, IBM850 | Western Europe
cp852 | 852, IBM852 | Central and Eastern Europe
cp855 | 855, IBM855 | Bulgarian, Byelorussian, Macedonian, Russian, Serbian
cp856 |  | Hebrew
cp857 | 857, IBM857 | Turkish
cp858 | 858, IBM858 | Western Europe
cp860 | 860, IBM860 | Portuguese
cp861 | 861, CP-IS, IBM861 | Icelandic
cp862 | 862, IBM862 | Hebrew
cp863 | 863, IBM863 | Canadian
cp864 | IBM864 | Arabic
cp865 | 865, IBM865 | Danish, Norwegian
cp866 | 866, IBM866 | Russian
cp869 | 869, CP-GR, IBM869 | Greek
cp874 |  | Thai
cp875 |  | Greek
cp932 | 932, ms932, mskanji, ms-kanji | Japanese
cp949 | 949, ms949, uhc | Korean
cp950 | 950, ms950 | Traditional Chinese
cp1006 |  | Urdu
cp1026 | ibm1026 | Turkish
cp1125 | 1125, ibm1125, cp866u, ruscii | Ukrainian (New in version 3.4)
cp1140 | ibm1140 | Western Europe
cp1250 | windows-1250 | Central and Eastern Europe
cp1251 | windows-1251 | Bulgarian, Byelorussian, Macedonian, Russian, Serbian
cp1252 | windows-1252 | Western Europe
cp1253 | windows-1253 | Greek
cp1254 | windows-1254 | Turkish
cp1255 | windows-1255 | Hebrew
cp1256 | windows-1256 | Arabic
cp1257 | windows-1257 | Baltic languages
cp1258 | windows-1258 | Vietnamese
cp65001 |  | Windows only: Windows UTF-8 (CP_UTF8) (New in version 3.3)
euc_jp | eucjp, ujis, u-jis | Japanese
euc_jis_2004 | jisx0213, eucjis2004 | Japanese
euc_jisx0213 | eucjisx0213 | Japanese
euc_kr | euckr, korean, ksc5601, ks_c-5601, ks_c-5601-1987, ksx1001, ks_x-1001 | Korean
gb2312 | chinese, csiso58gb231280, euc-cn, euccn, eucgb2312-cn, gb2312-1980, gb2312-80, iso-ir-58 | Simplified Chinese
gbk | 936, cp936, ms936 | Unified Chinese
gb18030 | gb18030-2000 | Unified Chinese
hz | hzgb, hz-gb, hz-gb-2312 | Simplified Chinese
iso2022_jp | csiso2022jp, iso2022jp, iso-2022-jp | Japanese
iso2022_jp_1 | iso2022jp-1, iso-2022-jp-1 | Japanese
iso2022_jp_2 | iso2022jp-2, iso-2022-jp-2 | Japanese, Korean, Simplified Chinese, Western Europe, Greek
iso2022_jp_2004 | iso2022jp-2004, iso-2022-jp-2004 | Japanese
iso2022_jp_3 | iso2022jp-3, iso-2022-jp-3 | Japanese
iso2022_jp_ext | iso2022jp-ext, iso-2022-jp-ext | Japanese
iso2022_kr | csiso2022kr, iso2022kr, iso-2022-kr | Korean
latin_1 | iso-8859-1, iso8859-1, 8859, cp819, latin, latin1, L1 | West Europe
iso8859_2 | iso-8859-2, latin2, L2 | Central and Eastern Europe
iso8859_3 | iso-8859-3, latin3, L3 | Esperanto, Maltese
iso8859_4 | iso-8859-4, latin4, L4 | Baltic languages
iso8859_5 | iso-8859-5, cyrillic | Bulgarian, Byelorussian, Macedonian, Russian, Serbian
iso8859_6 | iso-8859-6, arabic | Arabic
iso8859_7 | iso-8859-7, greek, greek8 | Greek
iso8859_8 | iso-8859-8, hebrew | Hebrew
iso8859_9 | iso-8859-9, latin5, L5 | Turkish
iso8859_10 | iso-8859-10, latin6, L6 | Nordic languages
iso8859_11 | iso-8859-11, thai | Thai languages
iso8859_13 | iso-8859-13, latin7, L7 | Baltic languages
iso8859_14 | iso-8859-14, latin8, L8 | Celtic languages
iso8859_15 | iso-8859-15, latin9, L9 | Western Europe
iso8859_16 | iso-8859-16, latin10, L10 | South-Eastern Europe
johab | cp1361, ms1361 | Korean
koi8_r |  | Russian
koi8_t |  | Tajik (New in version 3.5)
koi8_u |  | Ukrainian
kz1048 | kz_1048, strk1048_2002, rk1048 | Kazakh (New in version 3.5)
mac_cyrillic | maccyrillic | Bulgarian, Byelorussian, Macedonian, Russian, Serbian
mac_greek | macgreek | Greek
mac_iceland | maciceland | Icelandic
mac_latin2 | maclatin2, maccentraleurope | Central and Eastern Europe
mac_roman | macroman, macintosh | Western Europe
mac_turkish | macturkish | Turkish
ptcp154 | csptcp154, pt154, cp154, cyrillic-asian | Kazakh
shift_jis | csshiftjis, shiftjis, sjis, s_jis | Japanese
shift_jis_2004 | shiftjis2004, sjis_2004, sjis2004 | Japanese
shift_jisx0213 | shiftjisx0213, sjisx0213, s_jisx0213 | Japanese
utf_32 | U32, utf32 | all languages
utf_32_be | UTF-32BE | all languages
utf_32_le | UTF-32LE | all languages
utf_16 | U16, utf16 | all languages
utf_16_be | UTF-16BE | all languages
utf_16_le | UTF-16LE | all languages
utf_7 | U7, unicode-1-1-utf-7 | all languages
utf_8 | U8, UTF, utf8 | all languages
utf_8_sig |  | all languages

Changed in version 3.4: The utf-16* and utf-32* encoders no longer allow surrogate code points (U+D800 to U+DFFF) to be encoded. The utf-32* decoders no longer decode byte sequences that correspond to surrogate code points.

Python Specific Encodings

A number of predefined codecs are specific to Python, so their codec names have no meaning outside Python. These are listed in the tables below based on the expected input and output types (note that while text encodings are the most common use case for codecs, the underlying codec infrastructure supports arbitrary data transforms rather than just text encodings). For asymmetric codecs, the stated purpose describes the encoding direction.

Text Encodings

The following codecs provide str to bytes encoding and bytes-like object to str decoding, similar to the Unicode text encodings.

Codec | Aliases | Purpose
idna |  | Implements RFC 3490, see also encodings.idna. Only errors='strict' is supported.
mbcs | ansi, dbcs | Windows only: Encode operand according to the ANSI codepage (CP_ACP)
oem |  | Windows only: Encode operand according to the OEM codepage (CP_OEMCP) (New in version 3.6)
palmos |  | Encoding of PalmOS 3.5
punycode |  | Implements RFC 3492. Stateful codecs are not supported.
raw_unicode_escape |  | Latin-1 encoding with \uXXXX and \UXXXXXXXX for other code points. Existing backslashes are not escaped in any way. It is used in the Python pickle protocol.
undefined |  | Raise an exception for all conversions, even empty strings. The error handler is ignored.
unicode_escape |  | Encoding suitable as the contents of a Unicode literal in ASCII-encoded Python source code, except that quotes are not escaped. Decodes from Latin-1 source code. Beware that Python source code actually uses UTF-8 by default.
unicode_internal |  | Return the internal representation of the operand. Stateful codecs are not supported. (Deprecated since version 3.3: This representation is obsoleted by PEP 393.)

Binary Transforms

The following codecs provide binary transforms: bytes-like object to bytes mappings. They are not supported by bytes.decode() (which only produces str output).

Codec | Aliases | Purpose | Encoder / decoder
base64_codec [1] | base64, base_64 | Convert operand to multiline MIME base64 (the result always includes a trailing '\n'). Changed in version 3.4: accepts any bytes-like object as input for encoding and decoding. | base64.encodebytes() / base64.decodebytes()
bz2_codec | bz2 | Compress the operand using bz2 | bz2.compress() / bz2.decompress()
hex_codec | hex | Convert operand to hexadecimal representation, with two digits per byte | binascii.b2a_hex() / binascii.a2b_hex()
quopri_codec | quopri, quotedprintable, quoted_printable | Convert operand to MIME quoted printable | quopri.encode() with quotetabs=True / quopri.decode()
uu_codec | uu | Convert the operand using uuencode | uu.encode() / uu.decode()
zlib_codec | zip, zlib | Compress the operand using gzip | zlib.compress() / zlib.decompress()

[1] In addition to bytes-like objects, 'base64_codec' also accepts ASCII-only instances of str for decoding.

New in version 3.2: Restoration of the binary transforms.

Changed in version 3.4: Restoration of the aliases for the binary transforms.
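
Binary transforms are most conveniently applied through codecs.encode() and codecs.decode(), since bytes.decode() and str.encode() do not accept them:

    >>> import codecs
    >>> codecs.encode(b'spam', 'hex_codec')
    b'7370616d'
    >>> codecs.decode(b'7370616d', 'hex_codec')
    b'spam'
    >>> codecs.encode(b'spam', 'base64_codec')
    b'c3BhbQ==\n'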

Text Transforms

The following codec provides a text transform: a str to str mapping. It is not supported by str.encode() (which only produces bytes output).

Codec | Aliases | Purpose
rot_13 | rot13 | Returns the Caesar-cypher encryption of the operand

New in version 3.2: Restoration of the rot_13 text transform.

Changed in version 3.4: Restoration of the rot13 alias.
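
For example:

    >>> import codecs
    >>> codecs.encode('Python', 'rot_13')
    'Clguba'
    >>> codecs.decode('Clguba', 'rot_13')
    'Python'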

encodings.idna — Internationalized Domain Names in Applications

This module implements RFC 3490 (Internationalized Domain Names in Applications) and RFC 3492 (Nameprep: A Stringprep Profile for Internationalized Domain Names (IDN)). It builds upon the punycode encoding and stringprep.

These RFCs together define a protocol to support non-ASCII characters in domain names. A domain name containing non-ASCII characters (such as www.Alliancefrançaise.nu) is converted into an ASCII-compatible encoding (ACE, such as www.xn--alliancefranaise-npb.nu). The ACE form of the domain name is then used in all places where arbitrary characters are not allowed by the protocol, such as DNS queries, HTTP Host fields, and so on. This conversion is carried out in the application, if possible invisibly to the user: the application should transparently convert Unicode domain labels to IDNA on the wire, and convert ACE labels back to Unicode before presenting them to the user.

Python supports this conversion in several ways: the idna codec performs conversion between Unicode and ACE, separating an input string into labels based on the separator characters defined in section 3.1 of RFC 3490 and converting each label to ACE as required, and conversely separating an input byte string into labels based on the . separator and converting any ACE labels found into Unicode. Furthermore, the socket module transparently converts Unicode host names to ACE, so that applications need not be concerned about converting host names themselves when they pass them to the socket module. On top of that, modules that have host names as function parameters, such as http.client and ftplib, accept Unicode host names (http.client then also transparently sends an IDNA hostname in the Host field if it sends that field at all).

When receiving host names from the wire (such as in reverse name lookup), no automatic conversion to Unicode is performed: applications wishing to present such host names to the user should decode them to Unicode.
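
The idna codec can be used directly; the ACE form below matches the example given above:

    >>> 'www.alliancefrançaise.nu'.encode('idna')
    b'www.xn--alliancefranaise-npb.nu'
    >>> b'www.xn--alliancefranaise-npb.nu'.decode('idna')
    'www.alliancefrançaise.nu'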

The module encodings.idna also implements the nameprep procedure, which performs certain normalizations on host names, to achieve case-insensitivity of international domain names, and to unify similar characters. The nameprep functions can be used directly if desired.

encodings.idna.nameprep(label)

Return the nameprepped version of label. The implementation currently assumes query strings, so AllowUnassigned is true.

encodings.idna.ToASCII(label)

Convert a label to ASCII, as specified in RFC 3490. UseSTD3ASCIIRules is assumed to be false.

encodings.idna.ToUnicode(label)

Convert a label to Unicode, as specified in RFC 3490.

encodings.mbcs — Windows ANSI codepage

Encode operand according to the ANSI codepage (CP_ACP).

Availability: Windows only.

Changed in version 3.3: Support any error handler.

Changed in version 3.2: Before 3.2, the errors argument was ignored; 'replace' was always used to encode, and 'ignore' to decode.

encodings.utf_8_sig — UTF-8 codec with BOM signature

This module implements a variant of the UTF-8 codec: On encoding a UTF-8 encoded BOM will be prepended to the UTF-8 encoded bytes. For the stateful encoder this is only done once (on the first write to the byte stream). For decoding an optional UTF-8 encoded BOM at the start of the data will be skipped.
