@@ -11,7 +11,7 @@ msgstr ""
 "Project-Id-Version: Python 3.8\n"
 "Report-Msgid-Bugs-To: \n"
 "POT-Creation-Date: 2020-05-05 12:54+0200\n"
-"PO-Revision-Date: 2020-05-23 23:48+0200\n"
+"PO-Revision-Date: 2020-05-23 23:51+0200\n"
 "Language-Team: python-doc-es\n"
 "MIME-Version: 1.0\n"
 "Content-Type: text/plain; charset=UTF-8\n"
@@ -2262,25 +2262,32 @@ msgstr ""
 
 #: ../Doc/library/re.rst:1610
 msgid "Writing a Tokenizer"
-msgstr ""
+msgstr "Escribir un Tokenizer"
 
 #: ../Doc/library/re.rst:1612
 msgid ""
 "A `tokenizer or scanner <https://en.wikipedia.org/wiki/Lexical_analysis>`_ "
 "analyzes a string to categorize groups of characters. This is a useful "
 "first step in writing a compiler or interpreter."
 msgstr ""
+"Un `tokenizer or scanner <https://en.wikipedia.org/wiki/Lexical_analysis>`_ "
+"(tokenizador o escáner) analiza una cadena para categorizar grupos de "
+"caracteres. Este es un primer paso útil para escribir un compilador o "
+"intérprete."
 
 #: ../Doc/library/re.rst:1616
 msgid ""
 "The text categories are specified with regular expressions. The technique "
 "is to combine those into a single master regular expression and to loop over "
 "successive matches::"
 msgstr ""
+"Las categorías de texto se especifican con expresiones regulares. La "
+"técnica consiste en combinarlas en una única expresión regular maestra y en "
+"hacer un bucle sobre las sucesivas coincidencias::"
 
 #: ../Doc/library/re.rst:1672
 msgid "The tokenizer produces the following output::"
-msgstr ""
+msgstr "El tokenizador produce el siguiente resultado::"
 
 #: ../Doc/library/re.rst:1695
 msgid ""
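For context, the strings being translated above come from the `re` docs' "Writing a Tokenizer" section, whose technique is to combine per-category regular expressions into a single master pattern and loop over successive matches. A minimal sketch of that technique — not the docs' exact example; the category names and test input here are illustrative:

```python
import re
from typing import NamedTuple

class Token(NamedTuple):
    kind: str
    value: str

# Illustrative categories; the docs' example uses a richer specification.
TOKEN_SPEC = [
    ("NUMBER", r"\d+(?:\.\d*)?"),  # integer or decimal number
    ("ID",     r"[A-Za-z_]\w*"),   # identifiers
    ("OP",     r"[+\-*/=]"),       # arithmetic/assignment operators
    ("SKIP",   r"[ \t]+"),         # whitespace to discard
]

# Combine every category into one master regex of named groups.
MASTER = re.compile("|".join(f"(?P<{name}>{pattern})"
                             for name, pattern in TOKEN_SPEC))

def tokenize(code):
    """Loop over successive matches, yielding one Token per category hit."""
    for mo in MASTER.finditer(code):
        if mo.lastgroup != "SKIP":
            yield Token(mo.lastgroup, mo.group())
```

`Match.lastgroup` names the category whose alternative matched, so a single pass both splits and classifies the input.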