What is escaping in Python


2. Lexical analysis¶

A Python program is read by a parser. Input to the parser is a stream of tokens, generated by the lexical analyzer. This chapter describes how the lexical analyzer breaks a file into tokens.

Python reads program text as Unicode code points; the encoding of a source file can be given by an encoding declaration and defaults to UTF-8, see PEP 3120 for details. If the source file cannot be decoded, a SyntaxError is raised.

2.1. Line structure¶

A Python program is divided into a number of logical lines.

2.1.1. Logical lines¶

The end of a logical line is represented by the token NEWLINE. Statements cannot cross logical line boundaries except where NEWLINE is allowed by the syntax (e.g., between statements in compound statements). A logical line is constructed from one or more physical lines by following the explicit or implicit line joining rules.

2.1.2. Physical lines¶

A physical line is a sequence of characters terminated by an end-of-line sequence. In source files and strings, any of the standard platform line termination sequences can be used — the Unix form using ASCII LF (linefeed), the Windows form using the ASCII sequence CR LF (return followed by linefeed), or the old Macintosh form using the ASCII CR (return) character. All of these forms can be used equally, regardless of platform. The end of input also serves as an implicit terminator for the final physical line.

When embedding Python, source code strings should be passed to Python APIs using the standard C conventions for newline characters (the \n character, representing ASCII LF, is the line terminator).


2.1.3. Comments¶

A comment starts with a hash character ( # ) that is not part of a string literal, and ends at the end of the physical line. A comment signifies the end of the logical line unless the implicit line joining rules are invoked. Comments are ignored by the syntax.

2.1.4. Encoding declarations¶

If a comment in the first or second line of the Python script matches the regular expression coding[=:]\s*([-\w.]+) , this comment is processed as an encoding declaration; the first group of this expression names the encoding of the source code file. The encoding declaration must appear on a line of its own. If it is on the second line, the first line must also be a comment-only line. The recommended forms of an encoding expression are

# -*- coding: <encoding-name> -*-

which is recognized also by GNU Emacs, and

# vim:fileencoding=<encoding-name>

which is recognized by Bram Moolenaar's VIM.

If no encoding declaration is found, the default encoding is UTF-8. In addition, if the first bytes of the file are the UTF-8 byte-order mark ( b'\xef\xbb\xbf' ), the declared file encoding is UTF-8 (this is supported, among others, by Microsoft's notepad).

If an encoding is declared, the encoding name must be recognized by Python (see Standard Encodings ). The encoding is used for all lexical analysis, including string literals, comments and identifiers.

2.1.5. Explicit line joining¶

Two or more physical lines may be joined into logical lines using backslash characters ( \ ), as follows: when a physical line ends in a backslash that is not part of a string literal or comment, it is joined with the following line, forming a single logical line, deleting the backslash and the following end-of-line character. For example:
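A minimal sketch of explicit line joining (the date values are arbitrary): the backslash at the end of the first condition line splices it with the next physical line into one logical line.

```python
year, month, day = 2001, 10, 3

# the trailing backslash continues the logical line
if 1900 < year < 2100 and 1 <= month <= 12 \
        and 1 <= day <= 31:
    print('valid date')
```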

A line ending in a backslash cannot carry a comment. A backslash does not continue a comment. A backslash does not continue a token except for string literals (i.e., tokens other than string literals cannot be split across physical lines using a backslash). A backslash is illegal elsewhere on a line outside a string literal.

2.1.6. Implicit line joining¶

Expressions in parentheses, square brackets or curly braces can be split over more than one physical line without using backslashes. For example:
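A sketch in the spirit of the reference documentation's example: the bracketed list spans several physical lines without backslashes, and the continuation lines carry comments.

```python
month_names = ['Januari', 'Februari', 'Maart',      # these are the
               'April',   'Mei',      'Juni',       # Dutch names
               'Juli',    'Augustus', 'September',  # for the months
               'Oktober', 'November', 'December']   # of the year

print(len(month_names))
```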

Implicitly continued lines can carry comments. The indentation of the continuation lines is not important. Blank continuation lines are allowed. There is no NEWLINE token between implicit continuation lines. Implicitly continued lines can also occur within triple-quoted strings (see below); in that case they cannot carry comments.

2.1.7. Blank lines¶

A logical line that contains only spaces, tabs, formfeeds and possibly a comment, is ignored (i.e., no NEWLINE token is generated). During interactive input of statements, handling of a blank line may differ depending on the implementation of the read-eval-print loop. In the standard interactive interpreter, an entirely blank logical line (i.e. one containing not even whitespace or a comment) terminates a multi-line statement.

2.1.8. Indentation¶

Leading whitespace (spaces and tabs) at the beginning of a logical line is used to compute the indentation level of the line, which in turn is used to determine the grouping of statements.

Tabs are replaced (from left to right) by one to eight spaces such that the total number of characters up to and including the replacement is a multiple of eight (this is intended to be the same rule as used by Unix). The total number of spaces preceding the first non-blank character then determines the line’s indentation. Indentation cannot be split over multiple physical lines using backslashes; the whitespace up to the first backslash determines the indentation.

Indentation is rejected as inconsistent if a source file mixes tabs and spaces in a way that makes the meaning dependent on how many spaces a tab is worth; a TabError is raised in that case.

Cross-platform compatibility note: because of the nature of text editors on non-UNIX platforms, it is unwise to use a mixture of spaces and tabs for the indentation in a single source file. It should also be noted that different platforms may explicitly limit the maximum indentation level.

A formfeed character may be present at the start of the line; it will be ignored for the indentation calculations above. Formfeed characters occurring elsewhere in the leading whitespace have an undefined effect (for instance, they may reset the space count to zero).

The indentation levels of consecutive lines are used to generate INDENT and DEDENT tokens, using a stack, as follows.

Before the first line of the file is read, a single zero is pushed on the stack; this will never be popped off again. The numbers pushed on the stack will always be strictly increasing from bottom to top. At the beginning of each logical line, the line’s indentation level is compared to the top of the stack. If it is equal, nothing happens. If it is larger, it is pushed on the stack, and one INDENT token is generated. If it is smaller, it must be one of the numbers occurring on the stack; all numbers on the stack that are larger are popped off, and for each number popped off a DEDENT token is generated. At the end of the file, a DEDENT token is generated for each number remaining on the stack that is larger than zero.

Here is an example of a correctly (though confusingly) indented piece of Python code:
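A sketch along the lines of the reference documentation's example: the indentation levels look chaotic, but each block is consistent, so the code compiles and runs.

```python
def perm(l):
        # Comment-only lines are ignored for indentation purposes
    if len(l) <= 1:
                  return [l]
    r = []
    for i in range(len(l)):
             s = l[:i] + l[i+1:]
             p = perm(s)
             for x in p:
              r.append(l[i:i+1] + x)
    return r

print(perm([1, 2]))
```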

The following example shows various indentation errors:
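Since inconsistently indented code cannot appear in a runnable file directly, the sketch below (the function name f is arbitrary) feeds such a snippet to compile() to show the error the lexical analyzer reports: the indentation of return r does not match any level on the stack.

```python
bad_src = (
    "def f():\n"
    "    for i in range(3):\n"
    "        r = i\n"
    "      return r\n"   # dedent to 6 spaces: no matching level was pushed
)
try:
    compile(bad_src, '<bad>', 'exec')
except IndentationError as e:
    print(type(e).__name__)
```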

(Actually, the first three errors are detected by the parser; only the last error is found by the lexical analyzer — the indentation of return r does not match a level popped off the stack.)

2.1.9. Whitespace between tokens¶

Except at the beginning of a logical line or in string literals, the whitespace characters space, tab and formfeed can be used interchangeably to separate tokens. Whitespace is needed between two tokens only if their concatenation could otherwise be interpreted as a different token (e.g., ab is one token, but a b is two tokens).

2.2. Other tokens¶

Besides NEWLINE, INDENT and DEDENT, the following categories of tokens exist: identifiers, keywords, literals, operators, and delimiters. Whitespace characters (other than line terminators, discussed earlier) are not tokens, but serve to delimit tokens. Where ambiguity exists, a token comprises the longest possible string that forms a legal token, when read from left to right.

2.3. Identifiers and keywords¶

Identifiers (also referred to as names) are described by the following lexical definitions.

The syntax of identifiers in Python is based on the Unicode standard annex UAX-31, with elaboration and changes as defined below; see also PEP 3131 for further details.

Within the ASCII range (U+0001..U+007F), the valid characters for identifiers are the same as in Python 2.x: the uppercase and lowercase letters A through Z , the underscore _ and, except for the first character, the digits 0 through 9 .

Python 3.0 introduces additional characters from outside the ASCII range (see PEP 3131). For these characters, the classification uses the version of the Unicode Character Database as included in the unicodedata module.

Identifiers are unlimited in length. Case is significant.

The Unicode category codes used in the identifier definitions stand for:

Lu — uppercase letters

Ll — lowercase letters

Lt — titlecase letters

Lm — modifier letters

Lo — other letters

Nl — letter numbers

Mn — nonspacing marks

Mc — spacing combining marks

Nd — decimal numbers

Pc — connector punctuations

Other_ID_Start — explicit list of characters in PropList.txt to support backwards compatibility

All identifiers are converted into the normal form NFKC while parsing; comparison of identifiers is based on NFKC.
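A small sketch of this normalization in practice: U+210C ('ℌ', a black-letter capital H, a Unicode compatibility character) normalizes to a plain 'H' under NFKC, so an identifier spelled with it names the same variable as its plain spelling.

```python
import unicodedata

# NFKC turns the compatibility character 'ℌ' into a plain 'H'
print(unicodedata.normalize('NFKC', 'ℌello'))

ns = {}
exec('ℌello = 42', ns)   # the identifier is normalized while parsing
print(ns['Hello'])
```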

A non-normative HTML file listing all valid identifier characters for Unicode 14.0.0 can be found at https://www.unicode.org/Public/14.0.0/ucd/DerivedCoreProperties.txt

2.3.1. Keywords¶

The following identifiers are used as reserved words, or keywords of the language, and cannot be used as ordinary identifiers. They must be spelled exactly as written here:

False      await      else       import     pass
None       break      except     in         raise
True       class      finally    is         return
and        continue   for        lambda     try
as         def        from       nonlocal   while
assert     del        global     not        with
async      elif       if         or         yield
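The same list is available programmatically from the standard library's keyword module, which is a convenient way to check whether a name is reserved:

```python
import keyword

print('lambda' in keyword.kwlist)   # a hard keyword
print('match' in keyword.kwlist)    # soft keywords are not in kwlist
print(keyword.iskeyword('if'))
```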

2.3.2. Soft Keywords¶

New in version 3.10.

Some identifiers are only reserved under specific contexts. These are known as soft keywords. The identifiers match , case and _ can syntactically act as keywords in contexts related to the pattern matching statement, but this distinction is done at the parser level, not when tokenizing.

As soft keywords, their use with pattern matching is possible while still preserving compatibility with existing code that uses match , case and _ as identifier names.

2.3.3. Reserved classes of identifiers¶

Certain classes of identifiers (besides keywords) have special meanings. These classes are identified by the patterns of leading and trailing underscore characters:

_*
Not imported by from module import * .

_
In a case pattern within a match statement, _ is a soft keyword that denotes a wildcard .

Separately, the interactive interpreter makes the result of the last evaluation available in the variable _ . (It is stored in the builtins module, alongside built-in functions like print .)

Elsewhere, _ is a regular identifier. It is often used to name “special” items, but it is not special to Python itself.

The name _ is often used in conjunction with internationalization; refer to the documentation for the gettext module for more information on this convention.

It is also commonly used for unused variables.

__*__
System-defined names, informally known as "dunder" names. These names are defined by the interpreter and its implementation (including the standard library). Current system names are discussed in the Special method names section and elsewhere. More will likely be defined in future versions of Python. Any use of __*__ names, in any context, that does not follow explicitly documented use, is subject to breakage without warning.

__*
Class-private names. Names in this category, when used within the context of a class definition, are re-written to use a mangled form to help avoid name clashes between "private" attributes of base and derived classes. See section Identifiers (Names) .

2.4. Literals¶

Literals are notations for constant values of some built-in types.

2.4.1. String and Bytes literals¶

String literals are described by the following lexical definitions:

One syntactic restriction not indicated by these productions is that whitespace is not allowed between the stringprefix or bytesprefix and the rest of the literal. The source character set is defined by the encoding declaration; it is UTF-8 if no encoding declaration is given in the source file; see section Encoding declarations .

In plain English: Both types of literals can be enclosed in matching single quotes ( ' ) or double quotes ( " ). They can also be enclosed in matching groups of three single or double quotes (these are generally referred to as triple-quoted strings). The backslash ( \ ) character is used to escape characters that otherwise have a special meaning, such as newline, backslash itself, or the quote character.

Bytes literals are always prefixed with 'b' or 'B' ; they produce an instance of the bytes type instead of the str type. They may only contain ASCII characters; bytes with a numeric value of 128 or greater must be expressed with escapes.

Both string and bytes literals may optionally be prefixed with a letter 'r' or 'R' ; such strings are called raw strings and treat backslashes as literal characters. As a result, in string literals, '\U' and '\u' escapes in raw strings are not treated specially. Given that Python 2.x's raw unicode literals behave differently than Python 3.x's, the 'ur' syntax is not supported.

New in version 3.3: The 'rb' prefix of raw bytes literals has been added as a synonym of 'br' .

New in version 3.3: Support for the unicode legacy literal ( u'value' ) was reintroduced to simplify the maintenance of dual Python 2.x and 3.x codebases. See PEP 414 for more information.

A string literal with 'f' or 'F' in its prefix is a formatted string literal; see Formatted string literals . The 'f' may be combined with 'r' , but not with 'b' or 'u' , therefore raw formatted strings are possible, but formatted bytes literals are not.
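A short sketch of both prefixes (the variable name is arbitrary): a plain f-string substitutes and formats the expression, while the rf combination keeps backslashes literal.

```python
value = 2.5
print(f'value={value:.1f}')   # 'f' prefix: formatted string literal
print(rf'raw: \n stays')      # 'f' combined with 'r': backslash kept
```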

In triple-quoted literals, unescaped newlines and quotes are allowed (and are retained), except that three unescaped quotes in a row terminate the literal. (A "quote" is the character used to open the literal, i.e. either ' or " .)

Unless an 'r' or 'R' prefix is present, escape sequences in string and bytes literals are interpreted according to rules similar to those used by Standard C. The recognized escape sequences are:

\newline    Backslash and newline ignored
\\          Backslash (\)
\'          Single quote (')
\"          Double quote (")
\a          ASCII Bell (BEL)
\b          ASCII Backspace (BS)
\f          ASCII Formfeed (FF)
\n          ASCII Linefeed (LF)
\r          ASCII Carriage Return (CR)
\t          ASCII Horizontal Tab (TAB)
\v          ASCII Vertical Tab (VT)
\ooo        Character with octal value ooo
\xhh        Character with hex value hh

Escape sequences only recognized in string literals are:

\N{name}       Character named name in the Unicode database
\uxxxx         Character with 16-bit hex value xxxx
\Uxxxxxxxx     Character with 32-bit hex value xxxxxxxx

Escaping metacharacters

This chapter will show how to match metacharacters literally. Examples will be discussed for both manually and programmatically constructed patterns. You'll also learn about escape sequences supported by the re module.

Escaping with backslash

You have seen a few metacharacters and escape sequences that help to compose a RE. To match the metacharacters literally, i.e. to remove their special meaning, prefix those characters with a \ (backslash) character. To indicate a literal \ character, use \\ . This assumes you are using raw strings and not normal strings.
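A minimal sketch of the idea (the sample strings are arbitrary): escaping the dot makes it match only a literal dot, and a doubled backslash in a raw string matches one literal backslash.

```python
import re

# '.' normally matches any character; '\.' matches only a literal dot
print(re.sub(r'\.', ',', 'a.b.c'))

# r'\\' matches one literal backslash in the input
print(bool(re.search(r'\\', r'C:\files')))
```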

As emphasized earlier, regular expressions are just another tool to process text. Some examples and exercises presented in this book can be solved using normal string methods as well. It is a good practice to reason out whether regular expressions are needed for a given problem.


Okay, what if you have a string variable that must be used to construct a RE — how to escape all the metacharacters? Relax, the re.escape() function has got you covered. No need to manually take care of all the metacharacters or worry about changes in future versions.

Recall that in the Alternation section, join was used to dynamically construct a RE pattern from an iterable of strings. However, that didn't handle metacharacters. Here are some examples of how you can use re.escape() so that the resulting pattern will match the strings from the input iterable literally.
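A sketch of that pattern (the items and the input string are arbitrary): each element is passed through re.escape() before being joined with the alternation metacharacter.

```python
import re

items = ['a.b', 'c^2', 'x[5]']   # all contain metacharacters

# escape each element, then join with '|' for alternation
pat = re.compile('|'.join(re.escape(s) for s in items))
print(pat.sub('X', 'a.b plus c^2 plus x[5]'))
```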

Escape sequences

Certain characters like tab and newline can be expressed using escape sequences as \t and \n respectively. These are similar to how they are treated in normal string literals. However, \b is for word boundaries as seen earlier, whereas it stands for the backspace character in normal string literals.

The full list is mentioned at the end of docs.python: Regular Expression Syntax section as \a \b \f \n \N \r \t \u \U \v \x \\ . Do read the documentation for details as well as how it differs for byte data.

If an escape sequence is not defined, you’ll get an error.
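For instance, \e is not a recognized escape sequence in the re module, so compiling it fails:

```python
import re

try:
    re.compile(r'\e')
except re.error as e:
    print('re.error:', e)
```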

You can also represent a character using hexadecimal escape of the format \xNN where NN are exactly two hexadecimal characters. If you represent a metacharacter using escapes, it will be treated literally instead of its metacharacter feature.
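A short sketch (the sample strings are arbitrary): \x7c is the two-digit hex escape for | , so it is matched literally instead of acting as the alternation metacharacter, while \t matches a tab just as in normal string literals.

```python
import re

print(re.sub(r'\x7c', '-', 'a|b|c'))   # literal '|' via hex escape
print(re.split(r'\t', 'a\tb\tc'))      # '\t' matches a tab character
```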

See ASCII code table for a handy cheatsheet with all the ASCII characters and their hexadecimal representations.

Octal escapes will be discussed in the Backreference section. The Codepoints and Unicode escapes section will discuss escapes for unicode characters using \u and \U .

Cheatsheet and Summary

This short chapter discussed how to match metacharacters literally. re.escape() helps if you are using input strings sourced from elsewhere to build the final RE. You also saw how to use escape sequences to represent characters and how they differ from normal string literals.


Exercises

a) Transform the given input strings to the expected output using the same logic on both strings.

b) Replace (4)\| with 2 only at the start or end of the given input strings.

c) Replace any matching element from the list items with X for the given input strings. Match the elements from items literally. Assume no two elements of items will result in any matching conflict.

d) Replace the backspace character \b with a single space character for the given input string.

e) Replace all occurrences of \e with e .

f) Replace any matching item from the list eqns with X for the given string ip . Match the items from eqns literally.

Python: Escaping characters

In Python, character escaping is used to represent certain special characters, such as the backslash (\), quotes (' and "), and others. Escaping is done with the backslash (\) character.

Escaping quotes

In Python, quotes are escaped with the backslash (\). For example:

'this is an \'example\' of a string'

Escaping the backslash

The backslash itself can also be escaped with another backslash. For example:

'this is a \\ inside a string'

Special characters

There are also escape sequences that stand for special characters. For example:

  • \n — newline
  • \t — tab
  • \r — carriage return

For example, \n can be used to start a new line:

'this is an \n example string'

Raw strings

Python also supports raw strings, which do not process escape sequences. They are created by putting the letter r immediately before the string. For example:


r'this is an \n example string'
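The points above can be sketched in a few lines (the sample strings are arbitrary):

```python
print('this is an \'example\' of a string')   # escaped single quotes
print('a \\ b')                               # one literal backslash
print(len('x\ny'))                            # \n counts as one character
print(r'this is an \n example string')        # raw string: \n kept as-is
```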

Special characters, escaping, and raw strings in Python

Special characters are characters that represent an action, for example the newline character, the tab character, or the Backspace key.

Here are the main special characters in Python: \n, \t, \r, \v, \f, \b, \\, \', \" .

Below we will look at the ones that come up most often in practice.

The newline character \n

If the print() function encounters the \n character in a string, the text that follows it is moved to a new line:

As you can see, even the space that comes after the newline character \n is carried over.

We can also write everything with no space around \n, and then there will be no extra spaces:

If you check the length of the string, you can see that the newline counts as a single character, even though on the page it consists of two: a backslash and the letter n.

The word 'привет' is 6 characters, the word 'мир' is 3 characters, the exclamation mark is 1 character, and the newline is 1 character.
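The paragraphs above can be sketched like this:

```python
print('привет\n мир!')      # the space after \n begins the new line
print('привет\nмир!')       # written with no extra space
print(len('привет\nмир!'))  # 6 + 1 + 3 + 1 = 11 characters
```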

The tab character \t

If the print() function encounters the \t character in a string, the text that follows it is shifted to the next tab stop:
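A tiny sketch (the labels are arbitrary):

```python
print('name:\tGuido')    # text after \t moves to the next tab stop
print('lang:\tPython')
```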

The backslash character \\

This character is needed when a literal backslash has to appear in print() output but something gets in the way, such as other special characters. It is also the character that performs escaping.

For example, suppose we need to print the string Folder \name\ :

We get something different from what we wanted: instead of the folder name, Python interpreted \n as a newline character.

To prevent this, the backslash itself must be escaped with another backslash:
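A sketch of both attempts (the folder name is arbitrary):

```python
print('Folder \name\\')     # \n is taken as a newline: wrong output
print('Folder \\name\\')    # escaped backslashes print: Folder \name\
```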

Escaping characters

Escaping is needed so that print() outputs exactly the data we intend to see.

For example, quotes nested inside the same kind of quotes cause an error:

To avoid this error, the inner quotes must be escaped with a backslash:

You can also place quotes inside other quotes without escaping, as long as the two kinds differ:

That is, here the string is created with single quotes, while the text of the string itself uses double quotes.

In the same way, any special character can be escaped.

"Raw" strings in Python

Raw (unprocessed) strings are used to output strings literally, exactly as they are written.

For example, let's create a string with escaped characters:

You can see that the backslash used for escaping is not printed.

But if we print a raw string (to do this, simply add the letter r before it):

Then absolutely every character of the string is printed, including the escaping backslashes.

Raw strings are very often used for addresses and file paths, so that the backslashes do not have to be escaped.
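A sketch of the path use case (the path is arbitrary): the escaped normal string and the raw string denote the same value, but the raw form is written exactly as it prints.

```python
s = 'C:\\new\\folder'     # escaped backslashes in a normal string
r = r'C:\new\folder'      # raw string: written exactly as it prints
print(s)
print(r)
print(s == r)
```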
