average

Cumulative sums, moving averages, and SQL “group by” equivalents in R

為{幸葍}努か · Submitted on 2019-11-30 09:24:39
What's the most efficient way to create a moving average or rolling sum in R? And how do you combine a rolling function with a "group by"? While zoo is great, sometimes there are simpler ways. If your data behaves nicely and is evenly spaced, the embed() function effectively lets you create multiple lagged versions of a time series. If you look inside the vars package for vector autoregression, you will see that the package author chooses this route. For example, to calculate the 3-period rolling average of x, where x = (1:20)^2:

> x <- (1:20)^2
> embed(x, 3)
     [,1] [,2] [,3]
[1,]    9    4    1
[2,] …
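For readers working in Python rather than R, here is a comparable sketch using pandas (an illustrative translation, not from the original answer): `rolling(3).mean()` plays the role of embed() plus rowMeans(), and `groupby(...).transform` covers the "group by" case.

```python
import pandas as pd

# 3-period rolling average of x = (1:20)^2
x = pd.Series([i ** 2 for i in range(1, 21)])
roll3 = x.rolling(window=3).mean()   # first two entries are NaN

# rolling average combined with a "group by"
df = pd.DataFrame({"grp": ["a"] * 5 + ["b"] * 5,
                   "val": list(range(10))})
df["roll3"] = df.groupby("grp")["val"].transform(lambda s: s.rolling(3).mean())
```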

Math average with php

人盡茶涼 · Submitted on 2019-11-30 08:44:17
Time to test your math skills... I'm using PHP to find the average of $num1, $num2, $num3 and so on, up to an unset amount of numbers. It then saves that average to a database. The next time the PHP script is called, a new number is added to the mix. Is there a math (most likely algebra) equation I can use to find the average of the original numbers with the new number included? Or do I need to save the original numbers in the database so I can query them and recalculate the entire bunch together? — If what you mean by average is the mean and you don't want to store all the numbers, then store …
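The standard trick is to store only the running average and a count, and fold each new number in. A minimal sketch in Python (the PHP version is the same arithmetic):

```python
# Keep only the running mean and the count; no need to store every number.
def update_average(avg, count, new_value):
    count += 1
    avg += (new_value - avg) / count   # new_avg = old_avg + (x - old_avg) / n
    return avg, count

avg, n = 0.0, 0
for v in [10, 20, 30, 40]:
    avg, n = update_average(avg, n, v)
```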

Average of latest N records per group

天大地大妈咪最大 · Submitted on 2019-11-30 08:17:29
Question: My current application calculates a point average based on all records for each user:

SELECT `user_id`, AVG(`points`) AS pts
FROM `players`
WHERE `points` != 0
GROUP BY `user_id`

The business requirement has changed, and I now need to calculate the average based on the last 30 records for each user. The relevant tables have the following structure:

table: players; columns: player_id, user_id, match_id, points
table: users; columns: user_id

The following query does not work, but it does demonstrate …
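One way to express "last N rows per group, then average" outside SQL is with pandas. This is an illustrative sketch using the question's column names, with N = 2 standing in for 30:

```python
import pandas as pd

# toy version of the players table from the question
players = pd.DataFrame({
    "user_id":  [1, 1, 1, 2, 2],
    "match_id": [10, 11, 12, 13, 14],
    "points":   [4, 0, 8, 6, 2],
})

N = 2  # 30 in the real requirement
recent = (players[players["points"] != 0]
          .sort_values("match_id")
          .groupby("user_id")
          .tail(N))                       # last N nonzero rows per user
avg_points = recent.groupby("user_id")["points"].mean()
```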

Select average from MySQL table with LIMIT

眉间皱痕 · Submitted on 2019-11-30 07:47:55
Question: I am trying to get the average of the 5 lowest-priced items, grouped by the username attached to them. However, the query below gives the average price for each user (which, of course, is just that user's price), but I want only one answer returned.

SELECT AVG(price) FROM table
WHERE price > '0' && item_id = '$id'
GROUP BY username
ORDER BY price ASC
LIMIT 5

Answer 1: I think this is what you're after:

SELECT AVG(items.price)
FROM (SELECT t.price FROM TABLE t
      WHERE t.price > '0' AND t.item_id = '$id'
      ORDER BY t…
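The fix in plain terms: select the five cheapest rows first, then take one average over that subset. A small Python sketch of the same two-step logic (prices are made up):

```python
prices = [12.0, 5.0, 9.0, 3.0, 7.0, 20.0, 4.0]   # made-up prices for one item

# step 1: the five cheapest positive prices (the inner ORDER BY ... LIMIT 5)
lowest = sorted(p for p in prices if p > 0)[:5]
# step 2: a single average over that subset (the outer AVG)
avg_lowest = sum(lowest) / len(lowest)
```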

calculate and print the average value of strings in a column

我怕爱的太早我们不能终老 · Submitted on 2019-11-30 06:00:56
Question: I have a .txt file with 2 columns of values. They are 2D coordinates: the first column is the x value and the second is the z value. Unfortunately there are some lines with the same x value but a different z value. I'd like to calculate the average of those z values in order to associate a single z with each x. A sample of what I have:

435.212 108.894
435.212 108.897
435.212 108.9
435.212 108.903

As you can see, the x value 435.212 is associated with 4 different z values. What I …
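A short Python sketch of the grouping idea, using the sample coordinates from the question (plus one made-up extra x for illustration):

```python
from collections import defaultdict

# (x, z) pairs as parsed from the .txt file
rows = [
    (435.212, 108.894),
    (435.212, 108.897),
    (435.212, 108.900),
    (435.212, 108.903),
    (440.000, 100.000),   # made-up second x for illustration
]

groups = defaultdict(list)
for x, z in rows:
    groups[x].append(z)

# one averaged z per distinct x
averaged = {x: sum(zs) / len(zs) for x, zs in groups.items()}
```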

calculate sending file speed/sec by taking the average of 5 times of sent bytes [duplicate]

為{幸葍}努か · Submitted on 2019-11-29 18:46:38
This question already has an answer here: Calculate speed per sec and time left of sending a file using sockets tcp c# (3 answers)

I'm trying to calculate the file-transfer speed per second using an average: I take the difference between the running sum of sent bytes and prevSum, five times per second. Does the code below give me the correct speed? Should I change the size of the rate array, or should I change the Thread.Sleep(value)? I'm confused because each time I change a little thing, the speed value changes. What's the correct solution?

static long prevSum = 0;
static long[] rate = new long[5];
…
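The underlying technique, independent of C#: keep a fixed-size window of byte-count deltas, average it, and scale by the sampling interval. A hypothetical Python sketch (the window size and interval values are assumptions, mirroring the question's array of 5 samples at 0.2 s):

```python
from collections import deque

def make_speed_meter(window=5, interval=0.2):
    """Return a callable fed with cumulative bytes sent; yields bytes/sec."""
    deltas = deque(maxlen=window)       # last `window` byte-count deltas
    state = {"prev": 0}
    def record(total_bytes_sent):
        deltas.append(total_bytes_sent - state["prev"])
        state["prev"] = total_bytes_sent
        # average delta per sample, scaled up to one second
        return (sum(deltas) / len(deltas)) / interval
    return record

record = make_speed_meter()
for total in [1000, 2000, 3000, 4000, 5000]:   # sampled every 0.2 s
    speed = record(total)
```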

Pandas dataframe: Group by two columns and then average over another column

瘦欲@ · Submitted on 2019-11-29 18:26:06
Question: Assume I have a dataframe with the following values:

df:
col1 col2 value
1    2    3
1    2    1
2    3    1

I want to first group my dataframe by the first two columns (col1 and col2) and then average over the values of the third column (value). So the desired output would look like this:

col1 col2 avg-value
1    2    2
2    3    1

I am using the following code:

columns = ['col1','col2','avg']
df = pd.DataFrame(columns=columns)
df.loc[0] = [1,2,3]
df.loc[1] = [1,3,3]
print(df[['col1','col2','avg']].groupby(…
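A working version of what the question is after, sketched with the sample values shown above:

```python
import pandas as pd

df = pd.DataFrame({"col1": [1, 1, 2],
                   "col2": [2, 2, 3],
                   "value": [3, 1, 1]})

# group on the first two columns, average the third
out = (df.groupby(["col1", "col2"], as_index=False)["value"]
         .mean()
         .rename(columns={"value": "avg-value"}))
```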

Counting and computing the average length of words in ruby

守給你的承諾、 · Submitted on 2019-11-29 18:02:01
I'm trying to debug a program in Ruby that is meant to compute and print the average length of the words in an array.

words = ['Four', 'score', 'and', 'seven', 'years', 'ago', 'our', 'fathers',
         'brought', 'forth', 'on', 'this', 'continent', 'a', 'new', 'nation',
         'conceived', 'in', 'Liberty', 'and', 'dedicated', 'to', 'the',
         'proposition', 'that', 'all', 'men', 'are', 'created', 'equal']

word_lengths = Array.new
words.each do |word|
  word_lengths << word.length
end

sum = 0
word_lengths.each do |word_length|
  sum += word_length
end

average = sum.to_f / word_lengths.size
puts "The average is " + average.to_s

Calculate the average value of a mongodb document [duplicate]

和自甴很熟 · Submitted on 2019-11-29 15:58:23
This question already has an answer here: Mongo average aggregation query with no group (3 answers)

Suppose I have a collection like this:

{ _id: 1, city: "New York", state: "NY", murders: 328 }
{ _id: 2, city: "Los Angeles", state: "CA", murders: 328 }
...

The collection lists the number of murders in every city in the USA. I'd like to calculate the average number of murders across the whole country. I tried to use $group:

db.murders.aggregate([{$group: {_id:"$state", pop: {$avg:"$murders"} } }])

But I get the average by state instead:

{ "_id" : "NY", "murders" : 200 }
{ "_id" : "NJ", "murders" : 150 …
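The fix is to group on a constant key (`_id: null` in the shell, `None` in pymongo) so every document falls into one group. A sketch of the pipeline as a pymongo-style Python structure; since no server is assumed here, the expected average is checked with plain Python over the sample documents:

```python
# Pipeline with a constant group key: every document lands in one group,
# so $avg runs over the whole collection (usable with pymongo's aggregate()).
pipeline = [
    {"$group": {"_id": None, "avg_murders": {"$avg": "$murders"}}}
]

# what the pipeline computes, checked with plain Python on the sample docs
docs = [
    {"city": "New York", "state": "NY", "murders": 328},
    {"city": "Los Angeles", "state": "CA", "murders": 328},
]
expected = sum(d["murders"] for d in docs) / len(docs)
```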

Mysql Average on time column?

China☆狼群 · Submitted on 2019-11-29 15:55:09
Question: SELECT avg( duration ) as average FROM login;

The datatype of duration is TIME, so my values look like 00:00:14, 00:20:23, etc. Executing the query gives me 2725.78947368421. What is that? I want the result in time format; can MySQL compute the average of a TIME column?

Answer 1: Try this:

SELECT SEC_TO_TIME(AVG(TIME_TO_SEC(`duration`))) FROM `login`;

Test data:

CREATE TABLE `login` (duration TIME NOT NULL);
INSERT INTO `login` (duration) VALUES ('00:00:20'), ('00:01:10'), ('00:20:15'), ('00:06:50');

Result: 00:07:09
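The same TIME_TO_SEC / AVG / SEC_TO_TIME round trip can be sketched in Python to see why the test data averages to 00:07:09:

```python
def time_to_sec(t):
    """'HH:MM:SS' -> total seconds, like MySQL's TIME_TO_SEC."""
    h, m, s = (int(p) for p in t.split(":"))
    return h * 3600 + m * 60 + s

def sec_to_time(total):
    """Seconds -> 'HH:MM:SS', like MySQL's SEC_TO_TIME (rounded here)."""
    total = int(round(total))
    return f"{total // 3600:02d}:{total % 3600 // 60:02d}:{total % 60:02d}"

durations = ["00:00:20", "00:01:10", "00:20:15", "00:06:50"]
avg = sec_to_time(sum(map(time_to_sec, durations)) / len(durations))
```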