seq

The Ins and Outs of TCP (TCP 的那些事儿)

末鹿安然 submitted on 2019-12-18 10:51:57
The Ins and Outs of TCP (Part 1). TCP is an enormously complex protocol, because it has to solve a great many problems, and those problems in turn drag in many sub-problems and dark corners. Learning TCP is therefore a fairly painful process, but the process itself teaches you a lot. For the details of the protocol, I still recommend W. Richard Stevens' "TCP/IP Illustrated, Volume 1: The Protocols" (you can of course also read RFC 793 and the many RFCs that followed it). In addition, I will use English terminology throughout, to make it easier for you to look up the relevant technical documents by those keywords.

I wanted to write this article for three reasons. First, to test whether I can describe a protocol as complex as TCP clearly within a modest amount of space. Second, many programmers these days rarely read a book carefully and prefer fast-food content, so I hope this fast-food article gives you some understanding of this classic technology, a feel for the many difficulties of software design, and perhaps a few lessons for your own designs. Most importantly, I hope these fundamentals clear up things that used to be only half-understood, and make you realize how important the fundamentals are.

This article therefore does not try to cover everything; it is only a primer on the TCP protocol, its algorithms, and its principles. I originally planned to write a single article, but TCP is so damned complex, more complex than C++, with 30-odd years of optimizations, variants, debates, and revisions, that as I wrote I realized it had to be cut in two. Part 1 mainly introduces the definition of the TCP protocol and the retransmission mechanism used when packets are lost. Part 2

Using seq_along() to handle the empty case

萝らか妹 submitted on 2019-12-14 02:12:46
Question: I read that using seq_along() allows you to handle the empty case much better, but the concept is not clear in my mind. For example, I have this data frame:

df
            a            b          c          d
1   1.2767671  0.133558438  1.5582137  0.6049921
2  -1.2133819 -0.595845408 -0.9492494 -0.9633872
3   0.4512179  0.425949910  0.1529301 -0.3012190
4   1.4945791  0.211932487 -1.2051334  0.1218442
5   2.0102918  0.135363711  0.2808456  1.1293810
6   1.0827021  0.290615747  2.5339719 -0.3265962
7  -0.1107592 -2.762735937 -0.2428827 -0.3340126
8   0
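The excerpt is cut off before any answer, but a minimal sketch (not from the original post) of the empty-case behavior the question is asking about: 1:length(x) misbehaves when x is empty, while seq_along(x) does not.

x <- c(1.27, -1.21, 0.45)
seq_along(x)        # 1 2 3
1:length(x)         # 1 2 3 -- same result on a non-empty vector

empty <- numeric(0)
seq_along(empty)    # integer(0): a for loop over this runs zero times
1:length(empty)     # 1 0: counts down, so the loop body runs twice -- usually a bug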

Sequentially index between a boolean vector in R [duplicate]

半腔热情 submitted on 2019-12-14 01:09:24
Question: This question already has answers here: Create counter within consecutive runs of certain values (5 answers). Closed last year.

The title says it. My vector:

TF <- c(F,T,T,T,F,F,T,T,F,T,F,T,T,T,T,T,T,T,F)

My desired output:

[1] 0 1 2 3 0 0 1 2 0 1 0 1 2 3 4 5 6 7 0

Answer 1:

# with(rle(TF), sequence(lengths) * rep(values, lengths))
with(rle(TF), sequence(lengths) * TF)  # like Rich suggested in comments
# [1] 0 1 2 3 0 0 1 2 0 1 0 1 2 3 4 5 6 7 0

Answer 2: You could use rle() along with sequence().
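As a sketch of why the one-liner in Answer 1 works (the intermediate values below are reconstructed for illustration, not quoted from the original answer):

TF <- c(F,T,T,T,F,F,T,T,F,T,F,T,T,T,T,T,T,T,F)
r <- rle(TF)                # runs of F/T with lengths 1 3 2 2 1 1 1 7 1
sequence(r$lengths)         # restarts a 1,2,3,... counter inside every run
sequence(r$lengths) * TF    # TRUE/FALSE coerce to 1/0, zeroing the FALSE runs
# [1] 0 1 2 3 0 0 1 2 0 1 0 1 2 3 4 5 6 7 0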

Selecting column sequences and creating variables

微笑、不失礼 submitted on 2019-12-13 09:15:31
Question: I was wondering if there is a way to select specific columns via a sequence and create new variables from the result. For example, if I had 8 columns with n observations each, how could I create 4 variables that each select 2 columns sequentially? My dataset is much larger than this: I have 1416 variables with 62 observations each (I have pasted a link to the spreadsheet below, where the first column and row hold names). I would like to create new data frames from this, named as sites 1-12. So site
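The question is cut off above, but a minimal sketch of the general pattern it appears to be after, splitting a data frame's columns into consecutive pairs (the toy data frame, group size, and site names below are made up for illustration):

df <- as.data.frame(matrix(rnorm(8 * 5), ncol = 8))   # stand-in: 8 columns, 5 observations

# split the column indices 1..8 into consecutive pairs: (1,2), (3,4), (5,6), (7,8)
pairs <- split(seq_along(df), ceiling(seq_along(df) / 2))

# build one data frame per pair and name them site1..site4
sites <- lapply(pairs, function(idx) df[idx])
names(sites) <- paste0("site", seq_along(sites))
sites$site1   # the first two columns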

Number and letter numbering of tables in Word with cross-referencing

谁说我不能喝 submitted on 2019-12-13 01:00:12
Question: How do I get a smart numbering system as shown below? Whenever I start a new table, I want the number to increase; if, on the other hand, I add a row to an existing table, I would like a letter appended after the number. Is this possible? I have started using Field Codes and Sequences, and I believe that is the way to go. I know the numbering without the letters can be obtained by

Number #{ STYLEREF 1 \s }{ SEQ Table \# "00" }

I also know that alphabetic numbering can be produced by using the \alphabetic

Check if element exists in vector R

不问归期 submitted on 2019-12-12 22:47:49
Question: I am fighting with some weird behavior of R. Can someone explain what is happening? In the following example, check is FALSE in the first case and TRUE in the second. Why does seq behave differently from c?

by <- 0.1
percentage <- 60
probs <- seq(0, 1, by)
checkValues <- probs * 100
check <- percentage %in% checkValues

probs <- c(0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1)
checkValues <- probs * 100
check <- percentage %in% checkValues

It gets even weirder: if I set by <- 0.25 and percentage
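The excerpt is truncated before any answer appears, but the behavior is the classic floating-point pitfall: seq() builds its seventh element as 6 * 0.1, which rounds to a double slightly above 0.6, while the literal 0.6 in c() rounds slightly below, so the two probs vectors differ in the last bit and %in% compares exactly. A small sketch:

probs <- seq(0, 1, by = 0.1)
print(probs[7], digits = 17)       # 0.60000000000000009, not the same double as the literal 0.6
probs[7] * 100 == 60               # FALSE
0.6 * 100 == 60                    # TRUE
60 %in% (probs * 100)              # FALSE: %in% tests exact equality

# compare with a tolerance instead
any(abs(probs * 100 - 60) < 1e-9)  # TRUE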

How to declare a sparse Vector in Spark with Scala?

…衆ロ難τιáo~ submitted on 2019-12-12 19:33:15
Question: I'm trying to create a sparse Vector (the mllib.linalg.Vectors class, not the default one) but I can't understand how to use Seq. I have a small test file with three numbers per line, which I convert to an RDD, split the text into Doubles, and then group the lines by their first column.

Test file:

1 2 4
1 3 5
1 4 8
2 7 5
2 8 4
2 9 10

Code:

val data = sc.textFile("/home/savvas/DWDM/test.txt")
val data2 = data.map(s => Vectors.dense(s.split(' ').map(_.toDouble)))
val grouped = data2.groupBy(_(0))

This

Concatenate many Future[Seq] into one Future[Seq]

拥有回忆 submitted on 2019-12-12 15:19:15
Question: Without Future, this is how I combine all the smaller Seqs into one big Seq with flatMap:

category.getCategoryUrlKey(id: Int): Seq[Meta]

// main method
val appDomains: Seq[Int]
val categories: Seq[Meta] = appDomains.flatMap(category.getCategoryUrlKey(_))

Now the method getCategoryUrlKey could fail. I put a circuit breaker in front of it, to avoid calling it for the next elements after a number maxFailures of failures. But the circuit breaker doesn't return a Seq; it returns a Future[Seq]:

lazy val breaker = new akka

Create a time series by 30 minute intervals

非 Y 不嫁゛ submitted on 2019-12-12 09:58:02
Question: I am trying to create a time series with 30-minute intervals. I used the following command, part of whose output is also shown:

ts = seq(as.POSIXct("2009-01-01 00:00"), as.POSIXct("2014-12-31 23:30"), by = "hour")

"2010-02-21 12:00:00 EST" "2010-02-21 13:00:00 EST" "2010-02-21 14:00:00 EST"

When I change it to by = "min" it steps by every minute. How do I create a time series with 30-minute intervals?

Answer 1: You can specify minutes in the by argument, and pass the time zone "UTC" as Adrian pointed
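Answer 1 is cut off above; for completeness, a sketch of the standard approach: the by argument of seq.POSIXt accepts a string such as "30 min", and the from/to values below simply reuse the question's dates with an explicit time zone, as the answer seems to suggest:

ts <- seq(from = as.POSIXct("2009-01-01 00:00", tz = "UTC"),
          to   = as.POSIXct("2014-12-31 23:30", tz = "UTC"),
          by   = "30 min")
head(ts, 3)
# "2009-01-01 00:00:00 UTC" "2009-01-01 00:30:00 UTC" "2009-01-01 01:00:00 UTC"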

Spark Flatten Seq by reversing groupBy (i.e. repeat header for each sequence in it)

一笑奈何 submitted on 2019-12-12 09:04:43
Question: We have an RDD of the following form:

org.apache.spark.rdd.RDD[((BigInt, String), Seq[(BigInt, Int)])]

What we would like to do is flatten it into a single list of tab-delimited strings to save with saveAsTextFile. By flatten, I mean repeating the groupBy tuple (BigInt, String) for each item in its Seq. So data that looks like

((x1, x2), ((y1.1, y1.2), (y2.1, y2.2), ...))

will wind up looking like

x1 x2 y1.1 y1.2
x1 x2 y2.1 y2.2

So far the code I've tried mostly flattens it all