sf

Fastest way to determine COUNTRY from millions of GPS coordinates [R]

丶灬走出姿态 submitted on 2019-12-04 13:45:50
I have millions of GPS coordinates and want to quickly add a column giving the country each coordinate falls in. My current method works but is extremely slow:

library(data.table)

# REPRODUCE DATA
data <- data.table(latitude  = sample(seq(47, 52, by = 0.001), 1000000, replace = TRUE),
                   longitude = sample(seq(8, 23, by = 0.001), 1000000, replace = TRUE))

# REQUIRED PACKAGES
if (!require("sp")) install.packages("sp")
if (!require("rworldmap")) install.packages("rworldmap")
if (!require("sf")) install.packages("sf")
library(sp)
library(rworldmap)
library(sf)

# CURRENT SLOW FUNCTION
coords2country = function(points, latcol
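A common way to speed this up is to replace the per-point lookup with a single vectorised spatial join. A minimal sketch, assuming country polygons from the rnaturalearth package (not part of the question) and the reproducible data above:

library(sf)
library(data.table)
library(rnaturalearth)  # assumed source of country polygons

data <- data.table(latitude  = sample(seq(47, 52, by = 0.001), 1000000, replace = TRUE),
                   longitude = sample(seq(8, 23, by = 0.001), 1000000, replace = TRUE))

world <- ne_countries(returnclass = "sf")  # country polygons with a "name" column
pts   <- st_as_sf(data, coords = c("longitude", "latitude"), crs = 4326)

# One spatial join instead of a loop; st_join uses st_intersects by
# default, which is backed by a spatial index.
data$country <- st_join(pts, world["name"])$name

The join handles all million points in one call, so the cost is dominated by the indexed point-in-polygon test rather than R-level iteration.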

Convert sets of spatial coordinates to polygons in R using sf

爱⌒轻易说出口 submitted on 2019-12-04 13:14:50
Each element of my list contains a set of spatial coordinates that I would like to convert to polygons using sf. Each set of coordinates is sorted in the order in which I would like to "connect the dots", and the first and last rows are identical so that the polygons are closed. Each list element is named with a unique identifier, which I would like to retain as an attribute in the sf output. I have adapted code from the sf-related answer here (Convert sequence of longitude and latitude to polygon via sf in R), but my case differs in that I have multiple sets of coordinates (each of which should produce a separate
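A minimal sketch of one way to do the per-element conversion, using an illustrative named list coord_list (two tiny closed rings) in place of the real data:

library(sf)

# stand-in for the real list: named elements, first row == last row
coord_list <- list(
  a = rbind(c(0, 0), c(1, 0), c(1, 1), c(0, 0)),
  b = rbind(c(2, 2), c(3, 2), c(3, 3), c(2, 2))
)

polys <- st_sf(
  id       = names(coord_list),  # keep the identifier as an attribute
  geometry = st_sfc(lapply(coord_list, function(m) st_polygon(list(m))),
                    crs = 4326)
)

Because each matrix is already ordered and closed, st_polygon() can consume it directly; the lapply() produces one POLYGON per list element.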

Fastest way to extract a raster in R (improve the time of my reproducible code)

风格不统一 submitted on 2019-12-04 11:47:25
I'm wondering whether I have maximized the speed at which the mean of an area buffered around a point in a raster can be extracted. Can performance be improved any further LOCALLY? I already use parallel mclapply, and I know I could get further gains by setting up and running this on a cluster ("use a cluster" or "get more CPUs" is not the answer I'm looking for). Replicate some data:

library(raster)
library(parallel)
library(truncnorm)
library(gdalUtils)
library(velox)
library(sf)

ras <- raster(ncol = 1000, nrow = 1000,
              xmn = 2001476, xmx = 11519096, ymn = 9087279, ymx = 17080719)
ras[] = rtruncnorm(n = ncell
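For reference, a minimal sketch of the in-memory route that velox (already loaded above) enables; the point locations and the 50 km buffer distance are illustrative:

library(raster)
library(sf)
library(velox)

ras   <- raster(ncol = 1000, nrow = 1000,
                xmn = 2001476, xmx = 11519096, ymn = 9087279, ymx = 17080719)
ras[] <- runif(ncell(ras))

# illustrative points and buffers in the raster's coordinate space
pts  <- st_as_sf(data.frame(x = c(4e6, 9e6), y = c(1e7, 1.5e7)),
                 coords = c("x", "y"))
bufs <- st_buffer(pts, dist = 50000)

vx    <- velox(ras)  # one up-front copy of the raster into memory
means <- vx$extract(sp = bufs, fun = function(v) mean(v, na.rm = TRUE))

velox trades a one-off copy into memory for much faster repeated extraction; note the package has since been archived on CRAN, and exactextractr is a commonly suggested replacement.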

Polygons from coordinates

◇◆丶佛笑我妖孽 submitted on 2019-12-04 08:10:39
I've got a data.frame with lats and lngs that define the boundaries of rectangular boxes, like so:

  geohash north_lat south_lat  east_lng  west_lng
1   gbsuv  48.69141  48.64746 -4.306641 -4.350586
2   gbsuy  48.69141  48.64746 -4.262695 -4.306641

What's the easiest way to convert this into an sf object that holds a column of POLYGONs? The key to creating polygons is that the coordinates have to be in sequence to form a closed area (i.e., the last point is the same as the first point). So your data will need a bit of manipulation to create the coordinates and put them in order. In my example I've done
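A minimal sketch of one row-wise approach, building each rectangle with st_bbox() and st_as_sfc() so the ring ordering and closing are handled for you (data as in the example):

library(sf)

df <- data.frame(
  geohash   = c("gbsuv", "gbsuy"),
  north_lat = c(48.69141, 48.69141),
  south_lat = c(48.64746, 48.64746),
  east_lng  = c(-4.306641, -4.262695),
  west_lng  = c(-4.350586, -4.306641)
)

geoms <- do.call(c, lapply(seq_len(nrow(df)), function(i)
  st_as_sfc(st_bbox(c(xmin = df$west_lng[i], ymin = df$south_lat[i],
                      xmax = df$east_lng[i], ymax = df$north_lat[i]),
                    crs = st_crs(4326)))))

boxes <- st_sf(geohash = df$geohash, geometry = geoms)  # one POLYGON per row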

Create a map with colored polygons and coordinate points by using a .shp file in combination with another dataframe with coordinates

非 Y 不嫁゛ submitted on 2019-12-04 06:02:06
I have the following map boundaries in this .gdb folder, and here I have a csv that contains the variables I want to plot and the coordinates of the points that need to be displayed on the map. My final goal is to create a map with polygons, with points inside every polygon according to the coordinates. Every polygon should be colored according to the count of studentid (students) for the year 2019. Any alternative is accepted. I believe that the 1st code chunk below is
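A minimal sketch of the general shape of a solution; the file paths and the column names (polygon_id, longitude, latitude, year) are assumptions, since the real layer and csv are not shown:

library(sf)
library(dplyr)
library(ggplot2)

polys  <- st_read("boundaries.gdb")  # illustrative path; reads the first layer
points <- read.csv("students.csv")   # illustrative path

counts <- points %>%
  filter(year == 2019) %>%
  count(polygon_id, name = "students")  # students per polygon, assumed key

ggplot(left_join(polys, counts, by = "polygon_id")) +
  geom_sf(aes(fill = students)) +
  geom_point(data = points, aes(x = longitude, y = latitude), size = 0.5)

geom_sf() colours the polygons by the 2019 counts, and the geom_point() layer overlays the coordinates, assuming the polygons are in a longitude/latitude CRS (otherwise transform one side first).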

Creating a regular polygon grid over a spatial extent, rotated by a given angle

不想你离开。 submitted on 2019-12-03 16:52:34
Hi all, I am struggling with this and hope someone can come up with a simple solution. My objective is to create a regular polygon grid over the extent of a polygon, but rotated by a user-defined angle. I know that I can easily create a North/South polygon grid in sf, for example:

library(sf)
#> Linking to GEOS 3.6.2, GDAL 2.2.3, proj.4 4.9.3

inpoly <- st_read(system.file("shape/nc.shp", package = "sf"))[1,] %>%
  sf::st_transform(3857) %>%
  sf::st_geometry()

grd <- sf::st_make_grid
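A minimal sketch of the usual affine-transform trick (rotate the polygon by the opposite angle, grid it, then rotate the grid back); the 15-degree angle and cell size are illustrative:

library(sf)

rot <- function(a) matrix(c(cos(a), sin(a), -sin(a), cos(a)), 2, 2)

inpoly <- st_geometry(st_transform(
  st_read(system.file("shape/nc.shp", package = "sf"))[1, ], 3857))

angle <- 15 * pi / 180
ctr   <- st_centroid(st_union(inpoly))

poly_rot <- (inpoly - ctr) * rot(-angle) + ctr        # "un-rotate" the polygon
grd      <- st_make_grid(poly_rot, cellsize = 10000)  # plain N/S grid over it
grd_rot  <- (grd - ctr) * rot(angle) + ctr            # rotate the grid back
st_crs(grd_rot) <- st_crs(inpoly)                     # sfc arithmetic drops the CRS

Gridding the un-rotated polygon first guarantees the rotated grid still covers the original extent.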

Efficient way to plot data on an irregular grid

风格不统一 submitted on 2019-12-03 06:14:43
I work with satellite data organized on an irregular two-dimensional grid whose dimensions are scanline (along track dimension) and ground pixel (across track dimension). Latitude and longitude information for each centre pixel are stored in auxiliary coordinate variables, as well as the four corners coordinate pairs (latitude and longitude coordinates are given on the WGS84 reference ellipsoid). The data is stored in netCDF4 files. What I am trying to do is efficiently plotting these files (and possibly a combination of files—next step!) on a projected map. My approach so far, inspired by
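One sf-flavoured sketch of the idea (the question itself is library-agnostic): build one polygon per ground pixel from its four corner coordinates and let sf handle projection and plotting. The toy skewed grid below stands in for the real corner arrays:

library(sf)

# toy corner arrays with dims (scanline, ground pixel, 4 corners)
nscan <- 30; npix <- 40
lon_c <- array(0, c(nscan, npix, 4)); lat_c <- array(0, c(nscan, npix, 4))
for (i in seq_len(nscan)) for (j in seq_len(npix)) {
  x <- 8 + 0.1 * j + 0.02 * i; y <- 47 + 0.1 * i
  lon_c[i, j, ] <- c(x, x + 0.1, x + 0.1, x)
  lat_c[i, j, ] <- c(y, y, y + 0.1, y + 0.1)
}
values <- matrix(rnorm(nscan * npix), nscan, npix)

# one POLYGON per pixel, closing each ring with its first corner
cells <- vector("list", nscan * npix)
k <- 0
for (i in seq_len(nscan)) for (j in seq_len(npix)) {
  k <- k + 1
  cells[[k]] <- st_polygon(list(cbind(c(lon_c[i, j, ], lon_c[i, j, 1]),
                                      c(lat_c[i, j, ], lat_c[i, j, 1]))))
}

grid <- st_sf(value = as.vector(t(values)),
              geometry = st_sfc(cells, crs = 4326))
plot(grid["value"], border = NA)  # st_transform() first for a projected map

This keeps each pixel's true footprint, at the cost of building nscan * npix polygons, so it suits moderate granule sizes better than full-mission plots.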

Efficient extraction of all sub-polygons generated by self-intersecting features in a MultiPolygon

≡放荡痞女 submitted on 2019-12-03 01:19:06
Starting from a shapefile containing a fairly large number (about 20000) of potentially partially-overlapping polygons, I need to extract all the sub-polygons generated by intersecting their different "boundaries". In practice, starting from some mock-up data:

library(tibble)
library(dplyr)
library(sf)

ncircles <- 9
rmax     <- 120
x_limits <- c(-70, 70)
y_limits <- c(-30, 30)
set.seed(100)

xy <- data.frame(
  id = paste0("id_", 1:ncircles),
  x  = runif(ncircles, min(x_limits), max(x_limits)),
  y  = runif(ncircles, min(y_limits), max(y_limits))) %>%
  as_tibble()

polys <- st_as_sf(xy, coords = c(2, 3)) %>
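Recent sf versions can do this in one call: the unary form of st_intersection() splits a set of overlapping polygons into all distinct pieces and records how many inputs cover each piece. A minimal sketch on similar mock-up circles (a fixed radius here, for brevity):

library(sf)

set.seed(100)
xy <- data.frame(id = paste0("id_", 1:9),
                 x = runif(9, -70, 70), y = runif(9, -30, 30))
polys <- st_buffer(st_as_sf(xy, coords = c("x", "y")), dist = 40)

pieces <- st_intersection(polys)  # one feature per distinct sub-polygon
table(pieces$n.overlaps)          # how many inputs cover each piece

Each output row also carries an origins list-column mapping the piece back to the input polygons that generated it.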

Intersection of polygons in R using sf

荒凉一梦 submitted on 2019-12-02 08:30:29
I want to assess the degree of spatial proximity of each point to other equivalent points by looking at the number of others within 400 m (a 5-minute walk). I have some points on a map. I can draw a simple 400 m buffer around them. I want to determine which buffers overlap and then count the number of overlaps. This number of overlaps should relate back to the original point, so I can see which point has the highest number of overlaps and therefore, if I were to walk 400 m from that point, I could
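A minimal sketch of the buffer-and-count step with illustrative projected points (a metre-based CRS is needed for a 400 m buffer):

library(sf)

set.seed(1)
pts <- st_as_sf(data.frame(x = runif(20, 0, 2000), y = runif(20, 0, 2000)),
                coords = c("x", "y"), crs = 27700)  # illustrative CRS

bufs <- st_buffer(pts, dist = 400)
pts$overlaps <- lengths(st_intersects(bufs, bufs)) - 1  # subtract self-hit

lengths() on the sparse index gives the intersection count per buffer, and subtracting one drops each buffer's match with itself, leaving a per-point overlap count.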