analysis

Matlab .txt file analyses

Submitted by 旧巷老猫 on 2019-12-24 05:40:10
Question: I am analyzing a set of text in a .txt file. The file has 30 lines, and each line contains different phrases made up of text, numbers, and symbols. What is the best way to import this file into MATLAB for analysis (e.g. how many capital I's are in the text file, or how many #text phrases it contains, treating each line as a tweet)?

Answer 1: I think you'd best read the file line by line and save each line in a cell of a cell array:

    fid = fopen(filename);
    txtlines = cell(0);
    tline = fgetl(fid); …
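
For the counting part of the question, here is a minimal sketch in Python rather than MATLAB (the file name tweets.txt is a placeholder), showing the same line-by-line idea: read the file one line at a time, then count capital I characters and #text tokens.

    import re

    capital_i_count = 0
    hashtag_count = 0

    # Read the file line by line, as in the MATLAB answer above.
    with open("tweets.txt", encoding="utf-8") as f:
        for line in f:
            capital_i_count += line.count("I")               # capital I's
            hashtag_count += len(re.findall(r"#\w+", line))  # #text phrases

    print(capital_i_count, hashtag_count)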

CERN ROOT exporting data to plain text

Submitted by 耗尽温柔 on 2019-12-24 03:59:20
Question: I have tried and tried to follow similar questions like this one, but with no success. It's really simple: I have some .root files and can see the histograms in ROOT, but I want to export the data as a .txt file (or similar) so that I can analyse it in other programs.

Answer 1: Here is a working example. It reads in a ROOT file with three branches, named TS, ns, and nserr:

    #include <iostream>
    #include "TFile.h"
    #include "TTree.h"
    #include <fstream>
    using namespace std;

    void dumpTreeTotxt(){
        TFile *f = new …
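
Not from the answer above, but an alternative sketch: the same export can be done from Python with the uproot package, assuming the file contains a TTree (here called "tree", a placeholder name) with the TS, ns, and nserr branches mentioned in the answer.

    import uproot  # pip install uproot

    # Open the ROOT file and read the three branches into NumPy arrays.
    with uproot.open("data.root") as f:
        arrays = f["tree"].arrays(["TS", "ns", "nserr"], library="np")

    # Dump the branches side by side as plain text.
    with open("dump.txt", "w") as out:
        for ts, ns, nserr in zip(arrays["TS"], arrays["ns"], arrays["nserr"]):
            out.write(f"{ts} {ns} {nserr}\n")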

Invalid 'n' argument error in readBin() when trying to load a large (4GB+) audio file

Submitted by 萝らか妹 on 2019-12-24 03:44:07
Question: I'm trying to load a sample from a 4GB+ mono WAV file (total file duration 24 h; I'm loading a 15-minute slice).

    library(tuneR)
    so <- readWave("file.wav", from = 1, to = 15, units = "minutes")

This is the traceback:

    Error in readBin(con, int, n = N, size = bytes, signed = (bytes != 1), : invalid 'n' argument
    2: readBin(con, int, n = N, size = bytes, signed = (bytes != 1), endian = "little")
    1: readWave(filePath, from = 1, to = 15, units = "minutes")

This happens for every choice of 'from' and 'to' params (5 …
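
Not from the thread, but a workaround sketch: if switching tools is an option, the python-soundfile library can read just a slice of a very large WAV without loading the whole file. The file name and the minute-1-to-minute-15 window below simply mirror the readWave() call above.

    import soundfile as sf  # pip install soundfile

    sr = sf.info("file.wav").samplerate

    start = 1 * 60 * sr            # start at minute 1, in frames
    frames = (15 - 1) * 60 * sr    # read through minute 15

    # Only the requested frames are read, so the 4 GB file is never loaded whole.
    data, sr = sf.read("file.wav", start=start, frames=frames)
    print(data.shape, sr)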

Big tables and analysis in MySQL

Submitted by 与世无争的帅哥 on 2019-12-24 02:58:07
Question: For my startup, I track everything myself rather than relying on Google Analytics. This is nice because I can actually keep IPs, user IDs and so on. This worked well until my tracking table grew to about 2 million rows. The table is called acts and records ip, url, note and account_id, where available. Now, trying to run something like this:

    SELECT COUNT(DISTINCT ip)
    FROM acts
    JOIN users ON (users.ip = acts.ip)
    WHERE acts.url LIKE '%some_marketing_page%';

basically never finishes. I switched …
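
Not from the question; just an illustrative sketch, using Python's built-in sqlite3 rather than MySQL, of why such a query is slow: an index on the join column lets the join use an index search, while the leading-wildcard LIKE still forces a full scan of acts.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE acts  (ip TEXT, url TEXT, note TEXT, account_id INTEGER);
        CREATE TABLE users (ip TEXT);
        CREATE INDEX idx_users_ip ON users(ip);  -- helps the join
    """)

    query = """
        SELECT COUNT(DISTINCT acts.ip)
        FROM acts JOIN users ON users.ip = acts.ip
        WHERE acts.url LIKE '%some_marketing_page%'
    """

    # The plan shows a SCAN of acts (the leading '%' defeats any index on url)
    # and a SEARCH of users via idx_users_ip for the join.
    for row in con.execute("EXPLAIN QUERY PLAN " + query):
        print(row)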

Get the result of an analysis as JSON in Solr

Submitted by 梦想与她 on 2019-12-24 02:43:22
Question: I made a field type with a stop-word filter in Solr, as follows:

    <fieldtype name="tokenization_stopwords" class="solr.TextField" positionIncrementGap="100">
      <analyzer type="query">
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <filter class="solr.StopFilterFactory" words="stopwords.txt" ignoreCase="true"/>
      </analyzer>
    </fieldtype>

When I use this field type in the "Analysis" section of the admin UI, it works. However, I'd like to be able to get the result of this analysis as JSON. Does anyone know how to do this?

Answer 1: In …
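
Not part of the answer excerpt, but a sketch of one common route: the admin "Analysis" screen is backed by Solr's field-analysis request handler, which can be called directly and returns JSON. The core name mycore is a placeholder, and the handler path and parameter names should be checked against your Solr version.

    import requests  # pip install requests

    resp = requests.get(
        "http://localhost:8983/solr/mycore/analysis/field",
        params={
            "analysis.fieldtype": "tokenization_stopwords",  # the field type defined above
            "analysis.query": "some text to analyse",
            "wt": "json",
        },
    )
    print(resp.json())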

Trouble with nested for-loop running time

Submitted by 旧巷老猫 on 2019-12-24 00:54:41
Question: I have been thinking about this problem for a few days now and am stuck on calculating the number of times the second nested for-loop will run. I believe I have the correct formula for the running time of the other two for-loops, but this third one has me stuck. The first loop runs n-1 times, and the number of times loop #2 runs is the summation of 1 to n-1. If anyone could help me understand how to find the number of times loop #3 runs, it …
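
The loop structure is not shown in the excerpt, so as a hedged sketch assume the common pattern in which loop #2 runs i times for each value i of loop #1 and loop #3 runs j times for each value j of loop #2. A direct count can then be checked against the closed forms 1 + 2 + ... + (n-1) = n(n-1)/2 for loop #2 and (n-1)n(n+1)/6 for loop #3.

    def loop_counts(n):
        c2 = c3 = 0
        for i in range(1, n):              # loop #1: n-1 iterations
            for j in range(1, i + 1):      # loop #2: i iterations per i
                c2 += 1
                for k in range(1, j + 1):  # loop #3: j iterations per j
                    c3 += 1
        return c2, c3

    n = 10
    c2, c3 = loop_counts(n)
    assert c2 == n * (n - 1) // 2            # summation of 1 to n-1
    assert c3 == (n - 1) * n * (n + 1) // 6  # summation of 1+2+...+i over i
    print(c2, c3)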

Algorithm complexity when faced with subtraction in value

Submitted by 和自甴很熟 on 2019-12-23 20:15:55
Question: I have the following expression that I have to simplify in order to state the time complexity of my algorithm: (n^2 - n)/3. Are there any rules that allow me to simplify this expression further to a more common form such as Θ(n^2)? (I'm assuming that's what the result will be, but I might be wrong.) I simply don't know how to deal with the subtraction here. Usually, if two terms are added, you only consider the one that grows fastest when analysing the complexity of the …
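
Not part of the excerpt, but a worked bound showing why the subtraction does not change the class. For all n ≥ 2,

    \[ \frac{n^{2}}{6} \;\le\; \frac{n^{2}-n}{3} \;\le\; \frac{n^{2}}{3}, \]

so the definition of Θ is satisfied with constants c1 = 1/6, c2 = 1/3 and n0 = 2, giving (n^2 - n)/3 ∈ Θ(n^2). Subtracting the lower-order term n only changes the constants, not the growth class.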

Finding several regions of interest in an array

Submitted by 匆匆过客 on 2019-12-23 17:24:42
Question: Say I have run an experiment where a Python program has been left running for a long time, taking several measurements of some quantity against time. Each measurement is separated by somewhere between 1 and 3 seconds, with the time step used much smaller than that, say 0.01 s. Taking just the y axis, an example of such an event might look like:

    [... 0, 1, -1, 4, 1, 0, 0, 2, 3, 1, 0, -1, 2, 3, 5, 7, 8, 17, 21, 8, 3, 1, 0, 0, -2, -17, -20, -10, -3, 3, 1, 0, -2, -1, 1, 0, 0, 1, -1, 0, 0, 2, 0 ...]

Here we have …
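
Not from the thread; a minimal sketch of one common approach: threshold on the absolute deviation from the baseline and group contiguous samples into regions of interest. The threshold of 5 is arbitrary and would need tuning to the real noise level.

    import numpy as np

    y = np.array([0, 1, -1, 4, 1, 0, 0, 2, 3, 1, 0, -1, 2, 3, 5, 7, 8, 17, 21,
                  8, 3, 1, 0, 0, -2, -17, -20, -10, -3, 3, 1, 0, -2, -1, 1, 0,
                  0, 1, -1, 0, 0, 2, 0])

    threshold = 5                    # arbitrary; tune to the data
    active = np.abs(y) >= threshold  # samples that stand out from the baseline

    # Group contiguous active samples into (start, end) index regions.
    regions = []
    start = None
    for i, flag in enumerate(active):
        if flag and start is None:
            start = i                        # region begins
        elif not flag and start is not None:
            regions.append((start, i - 1))   # region ends
            start = None
    if start is not None:
        regions.append((start, len(y) - 1))

    print(regions)  # [(14, 19), (25, 27)] for the example data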

Calculating a group mean while excluding each case's individual value

Submitted by ぐ巨炮叔叔 on 2019-12-23 16:26:11
Question: I have a dataset with 70 cases (participants in a study). Is there a function that can calculate the mean of these 70 cases such that each individual case is excluded from its own mean? This would look like: mean for case x = (value(1) + ... + value(n) - value(x)) / (n - 1). Any information will help.

Answer 1: You could just do what you've suggested and remove each case from the total:

    x <- c(1:10)
    (sum(x) - x) / (length(x) - 1)
    #[1] 6.000000 5.888889 5.777778 5.666667 5.555556 5.444444 5.333333 5.222222 5.111111 5.000000
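
The same leave-one-out mean, sketched in NumPy for anyone working outside R (the values 1 to 10 just mirror the example above):

    import numpy as np

    x = np.arange(1, 11)                     # the 70 cases would go here
    loo_mean = (x.sum() - x) / (len(x) - 1)  # mean of all the other cases, per case
    print(loo_mean)                          # matches the R result above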

Roslyn: How to get the Namespace of a DeclarationSyntax with Roslyn C#

Submitted by 这一生的挚爱 on 2019-12-23 08:04:12
Question: I have a C# solution that contains some class files. With Roslyn I am able to parse the solution to obtain a list of the projects it contains; from there I can get the documents in each project, and then a list of every ClassDeclarationSyntax. This is the starting point:

    foreach (var v in _solution.Projects)
    {
        //Console.WriteLine(v.Name.ToString());
        foreach (var document in v.Documents)
        {
            SemanticModel model = document.GetSemanticModelAsync().Result;
            var classes = document …