python-multithreading

What is the difference between .Semaphore() and .BoundedSemaphore()?

Submitted by 喜你入骨 on 2019-12-21 02:28:30
Question: I know that threading.Lock() is equivalent to threading.Semaphore(1). Is threading.Lock() also equivalent to threading.BoundedSemaphore(1)? I've just come across threading.BoundedSemaphore(); what is the difference between the two, for example in the following snippet (used to limit the number of concurrent threads)?

import threading
sem = threading.Semaphore(5)
sem = threading.BoundedSemaphore(5)

Answer 1: A Semaphore can be released more times than it is acquired, and that will raise its counter above the starting value. A BoundedSemaphore, by contrast, raises a ValueError if a release() would push its counter above the starting value.
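A minimal sketch of that difference, using nothing beyond the standard threading module:

import threading

# Plain Semaphore: an extra release() silently pushes the counter above
# its starting value (here from 1 up to 2).
sem = threading.Semaphore(1)
sem.acquire()
sem.release()
sem.release()          # no error; the counter is now 2

# BoundedSemaphore: the same extra release() raises ValueError, which
# usually points at a bug in the acquire/release pairing.
bsem = threading.BoundedSemaphore(1)
bsem.acquire()
bsem.release()
try:
    bsem.release()     # would exceed the initial value of 1
except ValueError as exc:
    print("BoundedSemaphore refused the extra release:", exc)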

How can I have multiple clients on a TCP Python Chat Server?

Submitted by 冷暖自知 on 2019-12-21 01:12:58
Question: Any help on how I can get this to accept more than one client, and on why it doesn't at the moment? Thanks! Also, is there anything I'm doing wrong with this code? I've been following mostly Python 2 tutorials because I can't find any for Python 3.4. Here is my server code:

import socket
import time
import os
from threading import Thread

folderPath = "Chat Logs"
filePath = folderPath + "/" + str(time.strftime("%H-%M-%S_%d-%m-%Y")) + ".txt"

def clientHandler(c):
    while True:
        data = c.recv(1024)
        if
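The usual pattern for this is to keep accept() in a loop and hand each new connection to its own thread. A hedged sketch along the lines of the snippet above (the host, port, and broadcast behaviour are illustrative assumptions, not the asker's actual code):

import socket
from threading import Thread

clients = []                      # connected sockets, shared between handler threads

def clientHandler(c):
    while True:
        data = c.recv(1024)
        if not data:              # client disconnected
            break
        for other in clients:     # naive broadcast to every other client
            if other is not c:
                other.sendall(data)
    clients.remove(c)
    c.close()

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("0.0.0.0", 5000))         # placeholder host/port
s.listen(5)

while True:                       # accept() in a loop, one thread per client
    conn, addr = s.accept()
    clients.append(conn)
    Thread(target=clientHandler, args=(conn,), daemon=True).start()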

Is the max thread limit actually a non-relevant issue for Python / Linux?

Submitted by 百般思念 on 2019-12-20 11:56:09
Question: The Python application I'm currently working on needs to use 1000+ threads (Python's threading module). Not that any single thread is working at max CPU cycles; this is just a web server load-test app I'm creating, i.e. it emulates 200 Firefox clients all logging into a web server and downloading small web components, basically emulating humans who operate in seconds as opposed to microseconds. So, I was reading through the various topics such as "how many threads does python support on
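In practice the limit tends to be per-thread memory rather than a hard Python cap, and the default thread stack size can be reduced before spawning. A hedged sketch (the 512 KiB stack and the sleeping worker are illustrative assumptions, not values from the question):

import threading
import time

def fake_client(client_id):
    # Stand-in for one emulated browser session: mostly waiting on I/O,
    # which is exactly the kind of workload that tolerates many threads.
    time.sleep(1)

threading.stack_size(512 * 1024)   # shrink the per-thread stack before creating threads

threads = [threading.Thread(target=fake_client, args=(i,)) for i in range(1000)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("all emulated clients finished")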

How do I lock an entire SQLite connection (locked read + locked write)?

Submitted by 我只是一个虾纸丫 on 2019-12-20 06:26:29
Question: I have an sqlite3 database that is being accessed concurrently. ClientA reads the state of some table (Column1 holds the rows A, B, C) and needs to update the table with new letters of the alphabet. If ClientB reads the state of the table before ClientA updates it (say with the new letter D), then it's possible that both clients could (and in my case do) write D to the table, so that Column1 becomes A, B, C, D, D. But I need to ensure Column1 only has unique letters! How
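One common way to make the check-then-insert atomic in sqlite3 is to open the transaction with BEGIN IMMEDIATE, which takes the write lock before the read happens. A hedged sketch (the table name, column name, and connection handling are assumptions drawn from the question's description):

import sqlite3

conn = sqlite3.connect("shared.db", timeout=30)  # wait up to 30 s for the lock
conn.isolation_level = None                      # manage transactions manually

def add_letter_if_missing(letter):
    cur = conn.cursor()
    # BEGIN IMMEDIATE grabs the write lock up front, so two clients using
    # this pattern cannot interleave their SELECT and INSERT.
    cur.execute("BEGIN IMMEDIATE")
    try:
        cur.execute("SELECT 1 FROM letters WHERE Column1 = ?", (letter,))
        if cur.fetchone() is None:
            cur.execute("INSERT INTO letters (Column1) VALUES (?)", (letter,))
        cur.execute("COMMIT")
    except Exception:
        cur.execute("ROLLBACK")
        raise

add_letter_if_missing("D")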

Performance issue in python with nested loop

Submitted by 烂漫一生 on 2019-12-20 06:09:28
Question: I was able to improve some Python code a lot with numpy because of the dot product. I still have one part of the code that is very slow, though. I don't yet understand multithreading or whether it could help here; in my opinion it should be possible. Do you have a good idea of what to do here?

for x1 in range(a**l):
    for x2 in range(a**l):
        for x3 in range(a**l):
            f11 = 0
            cv1 = numpy.ndarray.sum(
                numpy.absolute(numpy.subtract(ws[x1], ws[x2])))
            cv2 = numpy.ndarray.sum(
                numpy
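Because the iterations are independent and the work is pure CPU, processes usually help more here than threads (the GIL keeps Python-level threads from running this kind of loop in parallel). A hedged sketch of parallelising the outer loop with multiprocessing, where a, l, ws, and the reduced inner body are placeholders rather than the asker's real code:

import numpy
from multiprocessing import Pool

a, l = 2, 3                        # illustrative sizes, not the asker's values
ws = numpy.random.rand(a**l, 4)    # placeholder data

def outer_iteration(x1):
    # Everything that depended on x1 in the original loop goes here;
    # each worker process handles one value of the outer index.
    total = 0.0
    for x2 in range(a**l):
        cv1 = numpy.sum(numpy.absolute(ws[x1] - ws[x2]))
        total += cv1
    return total

if __name__ == "__main__":
    with Pool() as pool:
        results = pool.map(outer_iteration, range(a**l))
    print(sum(results))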

Increasing throughput in a python script

Submitted by ╄→гoц情女王★ on 2019-12-20 02:59:29
Question: I'm processing a list of thousands of domain names from a DNSBL through dig, creating a CSV of URLs and IPs. This is a very time-consuming process that can take several hours. My server's DNSBL updates every fifteen minutes. Is there a way I can increase throughput in my Python script to keep pace with the server's updates? Edit: the script, as requested.

import re
import subprocess as sp

text = open("domainslist", 'r')
text = text.read()
text = re.split("\n+", text)
file = open('final.csv',
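Since each dig call spends nearly all of its time waiting on the network, a thread pool around the subprocess calls usually raises throughput dramatically. A hedged sketch (the resolve helper, worker count, and output format are assumptions, not the asker's exact script):

import subprocess as sp
from concurrent.futures import ThreadPoolExecutor

def resolve(domain):
    # dig +short prints only the answer records; an empty result means no answer.
    out = sp.check_output(["dig", "+short", domain]).decode().split()
    ip = out[-1] if out else ""
    return domain, ip

with open("domainslist") as f:
    domains = [line.strip() for line in f if line.strip()]

# Dozens of concurrent lookups: the work is network-bound, so threads are fine here.
with ThreadPoolExecutor(max_workers=50) as pool, open("final.csv", "w") as csv:
    for domain, ip in pool.map(resolve, domains):
        csv.write("%s,%s\n" % (domain, ip))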

Socket Thread and PyGTK

Submitted by 妖精的绣舞 on 2019-12-20 02:51:11
Question: I'm trying to write an instant-messaging program. The basic UI is almost finished, and I'm now looking into the message-receiving part. I have a UI class and a threaded Receive_Socket class. Each time the socket of the Receive_Socket class receives a message, it calls gobject.idle_add() to invoke a UI method that displays the message in a chat window. After the gobject.idle_add() line, I have a while loop that spins until the message has actually been displayed in the chat window (I want
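Instead of spinning in a while loop after gobject.idle_add(), the receiving thread can block on a threading.Event that the idle callback sets once the message has been drawn. A hedged sketch of that hand-off (the UI method name is an assumption; in newer PyGObject the same idea works with GLib.idle_add):

import threading
import gobject  # PyGTK's gobject module

def show_message(ui, text, done):
    ui.append_to_chat_window(text)  # assumed UI method that draws the message
    done.set()                      # tell the receiving thread the UI has caught up
    return False                    # run this idle callback only once

class ReceiveSocket(threading.Thread):
    def __init__(self, sock, ui):
        threading.Thread.__init__(self)
        self.sock = sock
        self.ui = ui

    def run(self):
        while True:
            text = self.sock.recv(1024)
            if not text:
                break
            done = threading.Event()
            gobject.idle_add(show_message, self.ui, text, done)
            done.wait()             # block here instead of a spinning while loop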

Python multiprocessing BETWEEN Amazon cloud instances

Submitted by …衆ロ難τιáo~ on 2019-12-19 09:27:42
Question: I'm looking to run a long-running Python analysis process on a few Amazon EC2 instances. The code already runs using the Python multiprocessing module and can take advantage of all cores on a single machine. The analysis is completely parallel and the instances do not need to communicate with one another. All of the work is "file-based", with each process working on each file individually ... so I was planning on just mounting the same S3 volume across all of the nodes. I was wondering if
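With fully independent, file-based work, one simple arrangement is to give every instance an index and have it claim only its share of the file list, then fan out locally with multiprocessing exactly as before. A hedged sketch (the environment variables, mount point, and process_file body are all assumptions):

import os
from multiprocessing import Pool

S3_MOUNT = "/mnt/s3"                                  # assumed shared mount point
NODE_INDEX = int(os.environ.get("NODE_INDEX", "0"))   # set per instance: 0..NODE_COUNT-1
NODE_COUNT = int(os.environ.get("NODE_COUNT", "1"))

def process_file(path):
    # Placeholder for the existing per-file analysis.
    return path

if __name__ == "__main__":
    all_files = sorted(os.listdir(S3_MOUNT))
    # Each instance takes every NODE_COUNT-th file, so no coordination is needed.
    my_files = [os.path.join(S3_MOUNT, f)
                for i, f in enumerate(all_files) if i % NODE_COUNT == NODE_INDEX]
    with Pool() as pool:
        pool.map(process_file, my_files)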

Why is my Python app stalled with 'system' / kernel CPU time

Submitted by 南楼画角 on 2019-12-19 07:21:28
Question: First off, I wasn't sure whether I should post this as an Ubuntu question or here, but I'm guessing it's more of a Python question than an OS one. My Python application is running on top of Ubuntu on a 64-core AMD server. It pulls images from 5 GigE cameras over the network by calling out to a .so through ctypes and then processes them. I am seeing frequent pauses in my application that cause frames from the cameras to be dropped by the external camera library. To debug this I've used the popular
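A low-effort first step when chasing kernel-side stalls like this is to sample the process's user versus system CPU time while it runs and see when the 'system' share spikes. A hedged sketch using the third-party psutil package (the one-second sampling interval is illustrative):

import time
import psutil  # third-party: pip install psutil

proc = psutil.Process()  # the current process; pass a PID to watch another one

prev = proc.cpu_times()
while True:
    time.sleep(1)
    cur = proc.cpu_times()
    # Per-second deltas: a jump in the 'system' column lines up with kernel-heavy stalls.
    print("user %+.2fs  system %+.2fs" % (cur.user - prev.user,
                                          cur.system - prev.system))
    prev = cur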