Regular expression to remove comments from a Python file


I want to remove all the comments in a Python file. The file looks like this:

--------------- comment.py ---------------
# this is comment line.
age = 18  # comment in line
msg1 = "I'm #1."  # comment. there's a # in code.
msg2 = 'you are #2. ' + 'He is #3'  # strange sign ' # ' in comment. 
print('Waiting your answer')

I have written quite a few regular expressions to extract all the comments; some of them are shown below:

(?(?<=['"])(?<=['"])\s*#.*$|\s*#.*$)
get:  #1."  # comment. there's a # in code.

(?<=('|")[^\1]*\1)\s*#.*$|\s*#.*$
wrong: re rejects it, the lookbehind (?<=...) must be fixed-width

But none of them work. What is the correct regular expression? Please help me, thanks!
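For reference, a minimal check with the re module (my own reproduction, not part of the original question) shows why a pattern like \s*#.*$ over-matches: re.search picks the leftmost '#', which here sits inside the string literal:

import re

line = '''msg1 = "I'm #1."  # comment. there's a # in code.'''
# The leftmost position where "\s*#" can match is the '#' inside the
# string literal, so everything from there to the end of the line is taken.
print(re.search(r'\s*#.*$', line).group())
# prints:  #1."  # comment. there's a # in code.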


You may not need to write a parser to handle all of these edge cases correctly. - Tim Biegeleisen
Parsing code with regular expressions is a bad idea. You would end up with an enormous expression that runs very slowly. - Olvin Roght
Thanks for the advice. Yesterday I was about to give up, so I wrote \s*#[^'"]*$ to handle the common cases. But Python IDLE handles every case; I wonder whether IDLE uses regular expressions for that? - Mal Sund
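A quick check (my addition, not part of the original thread) shows that the \s*#[^'"]*$ fallback also misfires on the sample line, because the real comment itself contains quotes:

import re

line = '''msg1 = "I'm #1."  # comment. there's a # in code.'''
m = re.search(r"""\s*#[^'"]*$""", line)
# The real comment contains a quote ("there's"), so the first match the
# pattern finds is the second '#' inside the comment, not the comment itself.
print(repr(m.group()))         # ' # in code.'
print(repr(line[:m.start()]))  # 'msg1 = "I\'m #1."  # comment. there\'s a'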
2 Answers

You can try using tokenize instead of regex. As @OlvinRoght said, parsing code with a regular expression may not be a good idea in this case. As you can see here, you can try to detect the comments like this:
import tokenize
fileObj = open(r'yourpath\comment.py', 'r')  # raw string so the backslash in the path is kept literally
for toktype, tok, start, end, line in tokenize.generate_tokens(fileObj.readline):
    # we can also use token.tok_name[toktype] instead of 'COMMENT'
    # from the token module 
    if toktype == tokenize.COMMENT:
        print('COMMENT' + " " + tok)

Output:

COMMENT # -*- coding: utf-8 -*-
COMMENT # this is comment line.
COMMENT # comment in line
COMMENT # comment. there's a # in code.
COMMENT # strange sign ' # ' in comment.

Then, to get the expected result, that is, the Python file without comments, you can try this:
nocomments = []
fileObj.seek(0)  # rewind the file: it was already consumed by the loop above
for toktype, tok, start, end, line in tokenize.generate_tokens(fileObj.readline):
    if toktype != tokenize.COMMENT:
        nocomments.append(tok)

print(' '.join(nocomments))

Output:

 age = 18 
 msg1 = "I'm #1." 
 msg2 = 'you are #2. ' + 'He is #3' 
 print ( 'Waiting your answer' )  
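Joining the tokens with spaces loses the original layout. As a further sketch (my own addition, not from the original answer), the surviving tokens can be fed back through tokenize.untokenize to keep the original spacing:

import io
import tokenize

def strip_comments(source):
    """Return source with COMMENT tokens dropped, keeping the original layout."""
    tokens = tokenize.generate_tokens(io.StringIO(source).readline)
    kept = [tok for tok in tokens if tok.type != tokenize.COMMENT]
    # untokenize rebuilds the text from the recorded token positions, so code
    # keeps its spacing; lines that held only a comment come back as
    # whitespace-only lines.
    return tokenize.untokenize(kept)

with open('comment.py') as f:
    print(strip_comments(f.read()))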

In this case, tokenize is better than re. - Mal Sund


Credit: https://gist.github.com/BroHui/aca2b8e6e6bdf3cb4af4b246c9837fa3

This works. It uses tokenize. You can modify this code to fit your needs.

""" Strip comments and docstrings from a file.
"""

import sys, token, tokenize

def do_file(fname):
    """ Run on just one file.
    """
    source = open(fname)
    mod = open(fname + ",strip", "w")

    prev_toktype = token.INDENT
    first_line = None
    last_lineno = -1
    last_col = 0

    tokgen = tokenize.generate_tokens(source.readline)
    for toktype, ttext, (slineno, scol), (elineno, ecol), ltext in tokgen:
        if 0:   # Change to if 1 to see the tokens fly by.
            print("%10s %-14s %-20r %r" % (
                tokenize.tok_name.get(toktype, toktype),
                "%d.%d-%d.%d" % (slineno, scol, elineno, ecol),
                ttext, ltext
                ))
        if slineno > last_lineno:
            last_col = 0
        if scol > last_col:
            mod.write(" " * (scol - last_col))
        if toktype == token.STRING and prev_toktype == token.INDENT:
            # Docstring
            mod.write("#--")
        elif toktype == tokenize.COMMENT:
            # Comment
            mod.write("\n")
        else:
            mod.write(ttext)
        prev_toktype = toktype
        last_col = ecol
        last_lineno = elineno

if __name__ == '__main__':
    do_file("text.txt")

text.txt:

# this is comment line.
age = 18  # comment in line
msg1 = "I'm #1."  # comment. there's a # in code.
msg2 = 'you are #2. ' + 'He is #3'  # strange sign ' # ' in comment. 
print('Waiting your answer')

Output (the script writes it to text.txt,strip):

age = 18  

msg1 = "I'm #1."  

msg2 = 'you are #2. ' + 'He is #3'  

print('Waiting your answer')

Input:

msg1 = "I'm #1."  # comment. there's a # in code.  the regex #.*$ will match  #1."  # comment. there's a # in code.  The right match should be # comment. there's a # in code.

Output:

msg1 = "I'm #1."  
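Since sys is already imported, the __main__ block could also take the file to strip from the command line instead of hard-coding "text.txt" (a small variant, not shown in the original answer):

if __name__ == '__main__':
    # e.g.  python strip.py comment.py  (the script name here is just an example)
    do_file(sys.argv[1])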
