How do I find the duplicates in a list of integers and create another list containing them?
def duplicates(array):
    seen = set()
    return {val for val in array if (val in seen or seen.add(val))}
---
duplicates(["a", "b", "c", "a", "d"])
> {'a'}
`val in seen` is true if `seen` already contains the value, so the element is produced by the return expression; otherwise `or seen.add(val)` adds the element to the seen set (without including it in the result, because `add` returns `None`, which counts as false in this context). Another solution, without using sets at all, is the following.
a = [1, 2, 3, 5, 4, 6, 4, 21, 4, 6, 3, 32, 5, 2, 23, 5]
duplicates = []
for i in a:
    if a.count(i) > 1 and i not in duplicates:
        duplicates.append(i)
print(duplicates)
[2, 3, 5, 4, 6]
The `list.count()` method can be used to find the duplicate elements in a list.
arr = []
dup = []
for i in range(int(input("Enter range of list: "))):
    arr.append(int(input("Enter Element in a list: ")))
for i in arr:
    if arr.count(i) > 1 and i not in dup:
        dup.append(i)
print(dup)
Here is a one-liner, for fun, or for cases where a single statement is required.
from functools import reduce  # reduce lives in functools in Python 3

(lambda iterable: reduce(lambda acc, item: (acc[0], acc[1] | {item}) if item in acc[0] else (acc[0] | {item}, acc[1]), iterable, (set(), set())))(some_iterable)
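Python 2's tuple parameters (`lambda (uniq, dup), item:`) were removed in Python 3, and `reduce` now lives in `functools`, so an expanded, runnable sketch of the same fold (the helper name `uniq_and_dups` is mine) looks like this:

```python
from functools import reduce

def uniq_and_dups(iterable):
    # Fold a (uniques, duplicates) pair of sets over the iterable:
    # an item already in the uniques set goes into the duplicates set.
    return reduce(
        lambda acc, item: (acc[0], acc[1] | {item}) if item in acc[0]
        else (acc[0] | {item}, acc[1]),
        iterable,
        (set(), set()),
    )

uniq, dup = uniq_and_dups([1, 2, 2, 3, 4, 5, 4])
print(dup)  # {2, 4}
```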
Some more benchmarks. Of course, doing...

set([x for x in l if l.count(x) > 1])

...is too expensive. The final method below is about 500 times faster (and longer arrays give even better results):
def dups_count_dict(l):
    d = {}
    for item in l:
        if item not in d:
            d[item] = 0
        d[item] += 1
    result_d = {key: val for key, val in d.items() if val > 1}
    return list(result_d.keys())
Only 2 loops, and no very expensive `l.count()` operations.
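The same two-pass counting can be written a little more compactly with `dict.get` (this variant is my addition, not part of the original answer):

```python
def dups_count_get(l):
    d = {}
    for item in l:
        # dict.get supplies 0 the first time an item is seen
        d[item] = d.get(item, 0) + 1
    return [key for key, val in d.items() if val > 1]

print(dups_count_get([1, 2, 3, 5, 4, 6, 4, 21, 4, 6, 3, 32, 5, 2, 23, 5]))
# [2, 3, 5, 4, 6]
```

Since Python 3.7 dicts preserve insertion order, so the duplicates come back in order of first appearance.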
Here is the code to compare the methods. The code is below; this is the output:
dups_count: 13.368s # this is a function which uses l.count()
dups_count_dict: 0.014s # this is a final best function (of the 3 functions)
dups_count_counter: 0.024s # collections.Counter
import numpy as np
from time import time
from collections import Counter


class TimerCounter(object):
    def __init__(self):
        self._time_sum = 0

    def start(self):
        self.time = time()

    def stop(self):
        self._time_sum += time() - self.time

    def get_time_sum(self):
        return self._time_sum


def dups_count(l):
    return set([x for x in l if l.count(x) > 1])


def dups_count_dict(l):
    d = {}
    for item in l:
        if item not in d:
            d[item] = 0
        d[item] += 1
    result_d = {key: val for key, val in d.items() if val > 1}
    return list(result_d.keys())


def dups_counter(l):
    counter = Counter(l)
    result_d = {key: val for key, val in counter.items() if val > 1}
    return list(result_d.keys())


def gen_array():
    np.random.seed(17)
    return list(np.random.randint(0, 5000, 10000))


def assert_equal_results(*results):
    primary_result = results[0]
    other_results = results[1:]
    for other_result in other_results:
        assert set(primary_result) == set(other_result) and len(primary_result) == len(other_result)


if __name__ == '__main__':
    dups_count_time = TimerCounter()
    dups_count_dict_time = TimerCounter()
    dups_count_counter = TimerCounter()

    l = gen_array()

    for i in range(3):
        dups_count_time.start()
        result1 = dups_count(l)
        dups_count_time.stop()

        dups_count_dict_time.start()
        result2 = dups_count_dict(l)
        dups_count_dict_time.stop()

        dups_count_counter.start()
        result3 = dups_counter(l)
        dups_count_counter.stop()

        assert_equal_results(result1, result2, result3)

    print('dups_count: %.3f' % dups_count_time.get_time_sum())
    print('dups_count_dict: %.3f' % dups_count_dict_time.get_time_sum())
    print('dups_count_counter: %.3f' % dups_count_counter.get_time_sum())
raw_list = [1, 2, 3, 3, 4, 5, 6, 6, 7, 2, 3, 4, 2, 3, 4, 1, 3, 4]
clean_list = list(set(raw_list))
duplicated_items = []

for item in raw_list:
    try:
        clean_list.remove(item)
    except ValueError:
        duplicated_items.append(item)

print(duplicated_items)
# [3, 6, 2, 3, 4, 2, 3, 4, 1, 3, 4]
Removing the duplicates essentially uses `set` (to build `clean_list`); then, while iterating over `raw_list`, each `item` is removed from `clean_list` to account for its occurrence in `raw_list`. If the `item` is not found, the raised `ValueError` exception is caught and the `item` is appended to the `duplicated_items` list. If the indexes of the duplicated items are needed, just enumerate the list (`for index, item in enumerate(raw_list):`). This approach is faster and better optimized for large lists (thousands of elements or more).

list2 = [1, 2, 3, 4, 1, 2, 3]
lset = set()
# lset.add(item) returns None (falsy), so an item is kept only
# when it has been seen before
dups = [item for item in list2 if item in lset or lset.add(item)]
print(dups)  # [1, 2, 3]
>>> from toolz import frequencies, valfilter
>>> a = [1, 2, 2, 3, 4, 5, 4]
>>> list(valfilter(lambda count: count > 1, frequencies(a)).keys())
[2, 4]
There are many answers up here, but I think this is a relatively very readable and easy-to-understand approach:
def get_duplicates(sorted_list):
    duplicates = []
    last = sorted_list[0]
    for x in sorted_list[1:]:
        if x == last:
            duplicates.append(x)
        last = x
    return set(duplicates)
Note: this requires the input list to be sorted first, as the parameter name `sorted_list` suggests.
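A quick usage sketch of `get_duplicates` (sorting an unsorted list first, since the function assumes sorted input):

```python
def get_duplicates(sorted_list):
    # Walk the sorted list and record any element equal to its predecessor.
    duplicates = []
    last = sorted_list[0]
    for x in sorted_list[1:]:
        if x == last:
            duplicates.append(x)
        last = x
    return set(duplicates)

print(get_duplicates(sorted([1, 2, 2, 3, 4, 5, 4])))  # {2, 4}
```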
set([i for i in lst if sum([1 for a in lst if a == i]) > 1])