throughput

BLE peripheral throughput limit

Submitted by 拜拜、爱过 on 2019-12-18 13:13:18
Question: We are developing a BLE sensor peripheral to work with an iPad that requires the following data throughput on the BLE notification characteristic (no acknowledgement), using a TI CC2541 BLE module and a custom profile: one 20-byte packet (the GATT standard maximum) every 10 ms, or, since we appear to be limited to 4 packets per connection interval, one connection interval every 40 ms. The required throughput is 2,000 bytes per second. The TI website recommends the CC2541 BLE solution…
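
The figures in the question hang together arithmetically. A back-of-envelope sketch, using only the numbers stated above (not measured values):

```python
# Back-of-envelope BLE notification throughput from the question's figures.
PAYLOAD_BYTES = 20          # GATT standard maximum payload per notification
PACKETS_PER_INTERVAL = 4    # observed limit of packets per connection event
CONN_INTERVAL_S = 0.040     # 40 ms connection interval

throughput = PAYLOAD_BYTES * PACKETS_PER_INTERVAL / CONN_INTERVAL_S
print(throughput)  # 2000.0 bytes per second
```

So 4 notifications of 20 bytes per 40 ms interval lands exactly on the 2,000 B/s requirement, with no headroom for retransmissions or skipped connection events.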

How to measure network throughput during runtime

Submitted by 谁说胖子不能爱 on 2019-12-12 10:15:54
Question: I'm wondering how best to measure network throughput at runtime. I'm writing a client/server application (both in Java). The server regularly sends messages (of compressed media data) over a socket to the client. I would like to adjust the compression level used by the server to match the network quality, so I would like to measure the time a big chunk of data (say 500 KB) takes to completely reach the client, including all delays in between. Tools like iperf don't seem to be an option…
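
The core of an in-application probe is just "timestamp before the known-size chunk, timestamp after the last byte, divide". A minimal sketch (Python rather than Java, purely for illustration; a local socketpair stands in for the real link, so the printed rate is illustrative only):

```python
import socket
import threading
import time

# Time how long a known-size chunk takes to fully arrive, then derive
# bytes/second. A local socketpair stands in for the client/server link.
CHUNK = b"x" * 500_000  # the ~500 KB probe chunk from the question

server, client = socket.socketpair()
sender = threading.Thread(target=lambda: (server.sendall(CHUNK), server.close()))
sender.start()

start = time.monotonic()
received = bytearray()
while len(received) < len(CHUNK):
    data = client.recv(65536)
    if not data:
        break
    received.extend(data)
elapsed = max(time.monotonic() - start, 1e-9)  # guard against zero division
sender.join()

bytes_per_sec = len(received) / elapsed
print(len(received))  # 500000
```

The same pattern in the real application would start the clock when the server announces the chunk (e.g. via a length header) and stop it on the last byte, smoothing over several probes before adjusting the compression level.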

How do I analyze JMeter summary results

Submitted by 元气小坏坏 on 2019-12-12 06:37:00
Question: I know that this question has been asked here before, but I am still not able to figure out the significance of the average, min, max, and throughput parameters in the JMeter summary report. Here is my JMeter setup: number of threads: 5000; ramp-up period: 1; loop count: 1. Results: Average: 738, Min: 155, Max: 2228, Throughput: 60.5%. So does that mean that my 5k requests took 738 milliseconds (0.7 s) to complete, or does it mean that every single request took 0.7 s to complete? Similarly, what shall be…
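
A sketch of how those summary numbers relate, on the reading that Average/Min/Max are per-request elapsed times in milliseconds (so 738 ms is the mean time of one request, not of the whole test) and Throughput is requests per second — the "60.5%" above is almost certainly 60.5/sec. The wall-clock duration below is hypothetical, back-derived from the reported figures:

```python
# JMeter-style throughput: total request count divided by the wall-clock
# span from the first request's start to the last request's end.
total_requests = 5000
wall_clock_s = 82.6  # hypothetical: 5000 requests at ~60.5 req/s take ~82.6 s

throughput = total_requests / wall_clock_s
print(round(throughput, 1))  # 60.5 requests per second
```

On that reading, the 5000 requests together spanned roughly 82–83 seconds of wall-clock time, while each individual request took 738 ms on average (155 ms at best, 2228 ms at worst).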

Simulating a CPU-intensive task with a fixed amount of work

Submitted by 最后都变了- on 2019-12-12 02:45:38
Question: I am simulating a CPU-bound task. Each CPU-bound task calculates the factorial of 800 and is a Runnable object executed by a thread. When I increase the number of threads (each thread runs one Runnable task), I find that some threads run so fast that their service times tend toward zero. I cannot understand this behaviour. The code is as follows:

    import java.math.BigInteger;

    public class CpuBoundJob implements Runnable {
        public void run() {
            BigInteger factValue = BigInteger.ONE;
            // Multiply 1 * 2 * ... * 800: a fixed amount of CPU work per task
            for (int i = 2; i <= 800; i++) {
                factValue = factValue.multiply(BigInteger.valueOf(i));
            }
        }
    }
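
One thing worth checking before blaming the scheduler: the unit of work itself is tiny. A rough sense of scale (Python here rather than Java, purely for illustration):

```python
import math
import time

# factorial(800) is a very small unit of work on modern hardware, so
# measured per-task service times can legitimately round toward zero,
# especially once JIT compilation and caches have warmed up.
start = time.perf_counter()
math.factorial(800)
elapsed = time.perf_counter() - start
print(elapsed < 0.01)  # True on typical machines
```

With a per-task cost this small, timer resolution and warm-up effects easily dominate, which is consistent with some tasks appearing to take "zero" time.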

Graphing individual sampler throughput in Grafana using JMeter and InfluxDB

Submitted by 旧时模样 on 2019-12-12 01:38:07
Question: I'm trying to graph the throughput of the individual samplers I have in JMeter, in Grafana using InfluxDB. Below is my JMeter test with 3 thread groups, each containing a dummy sampler. According to how JMeter calculates throughput, the throughput for the very first second should be 10, after 10 seconds it should be 2, and similarly after 20 seconds it should be 5. I've attached an InfluxDB screenshot below; using this I'm plotting the graph in Grafana. Below is what I've got. However, in the graph, as you can…
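
The figures quoted above follow from JMeter's throughput being cumulative: total samples so far divided by total elapsed time, which is why the value can fall and then rise again. The sample timeline below is hypothetical, chosen only to reproduce the question's numbers:

```python
# JMeter-style cumulative throughput: samples so far / elapsed seconds.
def cumulative_throughput(total_samples, elapsed_s):
    return total_samples / elapsed_s

print(cumulative_throughput(10, 1))    # 10.0  (10 samples in the first second)
print(cumulative_throughput(20, 10))   # 2.0   (only 20 samples by t = 10 s)
print(cumulative_throughput(100, 20))  # 5.0   (100 samples by t = 20 s)
```

A Grafana query that instead computes a per-interval rate from the raw InfluxDB counts will therefore disagree with JMeter's own cumulative figure, which is the usual source of confusion here.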

Choosing a primary key for a global secondary index

Submitted by 浪子不回头ぞ on 2019-12-11 14:55:42
Question: I'm reading the AWS docs about secondary indexes, and I don't understand the following statement: "The index key does not need to have any of the key attributes from the table." From what I understand, a GSI allows me to create a partition or sort key on an attribute in my table after its creation. I would like to make sure I understand the statement above: does it mean that I can create a partition or sort key on an attribute that is different from the current table's partition/hash key? Answer 1: Yes…
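
A concrete sketch of what the quoted statement permits, as a boto3-style CreateTable specification (the table and attribute names here are hypothetical): the GSI's key is an entirely different attribute from the table's own partition key.

```python
# Hypothetical table spec: the base table is keyed on order_id, while the
# global secondary index is keyed on customer_id — no overlap at all.
table_spec = {
    "TableName": "Orders",
    "KeySchema": [{"AttributeName": "order_id", "KeyType": "HASH"}],
    "AttributeDefinitions": [
        {"AttributeName": "order_id", "AttributeType": "S"},
        {"AttributeName": "customer_id", "AttributeType": "S"},
    ],
    "GlobalSecondaryIndexes": [
        {
            "IndexName": "by_customer",
            "KeySchema": [{"AttributeName": "customer_id", "KeyType": "HASH"}],
            "Projection": {"ProjectionType": "ALL"},
        }
    ],
}

table_keys = {k["AttributeName"] for k in table_spec["KeySchema"]}
gsi_keys = {k["AttributeName"]
            for k in table_spec["GlobalSecondaryIndexes"][0]["KeySchema"]}
print(table_keys & gsi_keys)  # set() — the GSI shares no key attribute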

Python ftplib: low download and upload speeds

Submitted by 我与影子孤独终老i on 2019-12-10 19:59:08
Question: I was wondering if anyone has observed that the time taken to download or upload a file over FTP using Python's ftplib is very large compared to performing FTP get/put at the Windows command prompt or using Perl's Net::FTP module. I created a simple FTP client similar to http://code.activestate.com/recipes/521925-python-ftp-client/ but I am unable to attain the speed I get when running FTP at the Windows DOS prompt or using Perl. Is there something I am missing, or is it a problem with the…
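
One common culprit here is ftplib's small default transfer block size: `FTP.retrbinary` defaults to 8 KB blocks, and passing a larger `blocksize` often closes most of the gap. A sketch of the mechanics, with a fake connection object standing in for a real `ftplib.FTP` so it runs without a live server:

```python
from io import BytesIO

def download(ftp, remote_name, blocksize=256 * 1024):
    """Fetch remote_name via retrbinary with a large block size.

    Works with a real ftplib.FTP instance; a stand-in is used below.
    """
    buf = BytesIO()
    ftp.retrbinary(f"RETR {remote_name}", buf.write, blocksize)
    return buf.getvalue()

class FakeFTP:
    """Minimal stand-in mimicking ftplib.FTP.retrbinary's interface."""
    def __init__(self, payload):
        self.payload = payload
        self.calls = 0

    def retrbinary(self, cmd, callback, blocksize=8192, rest=None):
        # Deliver the payload to the callback in blocksize-sized chunks,
        # as the real method does with data read from the data socket.
        for i in range(0, len(self.payload), blocksize):
            callback(self.payload[i:i + blocksize])
            self.calls += 1

ftp = FakeFTP(b"a" * 1_000_000)
data = download(ftp, "bigfile.bin")  # 256 KB blocks -> only 4 callbacks
print(len(data), ftp.calls)          # 1000000 4
```

With the 8 KB default, the same 1 MB transfer would invoke the callback over a hundred times; the per-block Python overhead is what tends to make ftplib look slow next to native FTP clients.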

Why is the throughput of this C# data processing app so much lower than the raw capabilities of the server?

Submitted by 和自甴很熟 on 2019-12-10 15:48:45
Question: I have put together a small test harness to diagnose why the throughput of my C# data processing application (its core function selects records in batches of 100 from a remote database server using non-blocking IO and performs simple processing on them) is much lower than it could be. I've observed that while running, the app hits no bottleneck in CPU (<3%), network or disk IO, or RAM, and does not stress the database server (the data set on the database is almost always…
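
When nothing is saturated — not CPU, not IO, not the database — the usual suspect is round-trip latency combined with too little concurrency: each batch waits out a full network round trip before the next one starts. A sketch of the effect (Python rather than C#, with `sleep` standing in for a hypothetical 50 ms batch fetch):

```python
import time
from concurrent.futures import ThreadPoolExecutor

LATENCY_S = 0.05  # pretend each 100-record batch costs one 50 ms round trip
BATCHES = 20

def fetch_batch(i):
    time.sleep(LATENCY_S)  # simulated network round trip
    return 100             # records returned in the batch

# Serial: every batch waits for the previous one's round trip.
start = time.monotonic()
serial = sum(fetch_batch(i) for i in range(BATCHES))
serial_time = time.monotonic() - start

# Overlapped: in-flight requests hide each other's latency.
start = time.monotonic()
with ThreadPoolExecutor(max_workers=10) as pool:
    overlapped = sum(pool.map(fetch_batch, range(BATCHES)))
overlapped_time = time.monotonic() - start

print(serial, overlapped)             # same total work: 2000 records each
print(overlapped_time < serial_time)  # True: overlap raises throughput
```

The same record count moves in a fraction of the wall-clock time once requests overlap, without any single resource showing as "busy" — which matches the symptoms described above.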

Cassandra slowed down with more nodes

Submitted by 我与影子孤独终老i on 2019-12-06 11:33:20
I set up a Cassandra cluster on AWS. What I want is increased I/O throughput (number of reads/writes per second) as more nodes are added, as advertised. However, I got exactly the opposite: performance drops as new nodes are added. Do you know any typical issues that prevent it from scaling? Here are some details: I am adding a text file (15 MB) to the column family. Each line is a record; there are 150,000 records. With 1 node, the write takes about 90 seconds, but with 2 nodes it takes 120 seconds. I can see the data is spread across the 2 nodes. However, there…
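
For scale, the write rates implied by the question's own numbers:

```python
# Quick arithmetic on the figures given: the same 150,000-record load
# drops from ~1,667 writes/s on one node to 1,250 writes/s on two.
records = 150_000
one_node = records / 90     # ≈ 1666.7 records/s
two_nodes = records / 120   # 1250.0 records/s
print(one_node > two_nodes)  # True: throughput fell as a node was added
```

A roughly 25% drop per added node points at a fixed cost per write growing with the cluster (replication, coordination, or a single-threaded loading client), rather than the nodes themselves being slow.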

Can I get the actual write capacity of a DynamoDB or DynamoDB2 table?

Submitted by 孤街浪徒 on 2019-12-06 06:48:41
Question: Suppose I access an existing DynamoDB table:

    import boto
    conn = boto.connect_dynamodb(...)
    table = conn.get_table(tableName)

or a DynamoDB2 table:

    import boto
    from boto.dynamodb2.layer1 import DynamoDBConnection
    from boto.dynamodb2.table import Table
    conn = DynamoDBConnection(...)
    table = Table(tableName, connection=conn)

I want to know how much data was written to it right before I accessed it, so I don't want the provisioned write throughput value but the actual throughput. How can I get this info…
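
Actual (as opposed to provisioned) write throughput is exposed through CloudWatch's `ConsumedWriteCapacityUnits` metric for the table. A sketch of the query parameters such a boto-style `get_metric_statistics` call takes (the table name and time window below are hypothetical):

```python
from datetime import datetime, timedelta

# Fixed timestamp so the example is reproducible; real code would use
# datetime.utcnow() for the end of the window.
now = datetime(2019, 12, 6, 6, 0, 0)

query = {
    "Namespace": "AWS/DynamoDB",
    "MetricName": "ConsumedWriteCapacityUnits",
    "Dimensions": [{"Name": "TableName", "Value": "my-table"}],
    "StartTime": now - timedelta(minutes=10),  # the 10 minutes before access
    "EndTime": now,
    "Period": 60,            # one datapoint per minute
    "Statistics": ["Sum"],   # units consumed per period
}
print(query["MetricName"])
```

Summing the returned datapoints over the window gives the write units actually consumed just before the table was accessed, independent of whatever throughput was provisioned.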