indexing

Thinking Sphinx rake task aborted: searchd is running while rebuilding or running ts:start/stop; indexing works fine

Submitted by 牧云@^-^@ on 2019-12-23 09:29:42
Question: When I invoke rake ts:rebuild RAILS_ENV=production, I get the following:

(in /var/www/abc.com/public/abc/releases/20101008073517)
** Erubis 2.6.6
Stopped search daemon (pid 22531).
Generating Configuration to /var/www/abc.com/public/abc/releases/20101008073517/config/production.sphinx.conf
Sphinx 1.10-beta (r2420)
Copyright (c) 2001-2010, Andrew Aksyonoff
Copyright (c) 2008-2010, Sphinx Technologies Inc (http://sphinxsearch.com)
using config file '/var/www/abc.com/public/abc/releases

Slicing multiple column ranges from a dataframe using iloc [duplicate]

Submitted by 风格不统一 on 2019-12-23 09:05:03
Question: This question already has answers here: What would be Python/pandas equivalent of this R code for rearranging columns of a dataframe? (3 answers). Closed 2 years ago. I have a df with 32 columns:

df.shape
(568285, 32)

I am trying to rearrange the columns in a specific way, and drop the first column, using iloc:

df = df.iloc[:,[31,[1:23],24,25,26,28,27,29,30]]
SyntaxError: invalid syntax

Is this the right way to do it?

Answer 1: You could use the np.r_ indexer. class RClass(AxisConcatenator) |
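The answer's suggestion can be sketched concretely. The small frame below is a hypothetical stand-in for the 568285x32 df in the question; the column names and positions are illustrative only:

```python
import numpy as np
import pandas as pd

# A small stand-in frame: 2 rows, 10 columns named c0..c9.
df = pd.DataFrame(np.arange(20).reshape(2, 10),
                  columns=[f"c{i}" for i in range(10)])

# np.r_ concatenates slices and scalars into one flat integer array,
# which is exactly what iloc accepts, so mixed ranges and single
# positions can be combined in a single indexer.
cols = np.r_[9, 1:5, 6, 5, 7, 8]
out = df.iloc[:, cols]
print(list(out.columns))  # ['c9', 'c1', 'c2', 'c3', 'c4', 'c6', 'c5', 'c7', 'c8']
```

Note that slice stops inside np.r_ are exclusive, as with ordinary Python slicing, so a range intended to cover positions 1 through 23 inclusive would be written 1:24.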

Index sizes in MySQL

Submitted by 核能气质少年 on 2019-12-23 08:58:59
Question: I am just starting to investigate optimization for my MySQL database. From what I'm reading, indexing seems like a good idea, so I want to create an index on one of my VARCHAR columns, on a table using the MyISAM engine. I also understand that an index is limited to a size of 1,000 bytes, while a VARCHAR character is 3 bytes in size. Does that mean that if I want to index a VARCHAR column with 50 rows, I need an index prefix of 6 characters? (1,000 bytes / 50 rows /
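For what it's worth, the key-length limit applies to each indexed value individually, not divided across the row count, so the arithmetic in the question can be redone as a quick sketch (assuming the 1,000-byte MyISAM key limit and MySQL's 3-byte utf8 worst case):

```python
# Back-of-the-envelope prefix-length calculation.
# The limit is per key value; the number of rows plays no part.
MYISAM_KEY_LIMIT = 1000   # bytes per index key
UTF8_BYTES_PER_CHAR = 3   # worst case for MySQL's 3-byte utf8 charset

max_prefix_chars = MYISAM_KEY_LIMIT // UTF8_BYTES_PER_CHAR
print(max_prefix_chars)  # 333
```

In other words, a prefix of roughly 333 characters is the ceiling for a 3-byte charset, regardless of whether the table holds 50 rows or 50 million.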

How long should it take to build an index using ALTER TABLE in MySQL?

Submitted by 邮差的信 on 2019-12-23 08:54:52
Question: This might be a bit like asking how long a piece of string is, but the stats are: Intel dual core, 4GB RAM, a table with 8 million rows and ~20 columns, mostly varchars, with an auto_increment primary id. The query is:

ALTER TABLE my_table ADD INDEX my_index (my_column);

my_column is varchar(200) and storage is MyISAM. Order of magnitude, should it be 1 minute, 10 minutes, 100 minutes? Thanks. Edit: OK, it took 2 hours 37 minutes, compared to 0 hours 33 minutes on a lesser-spec machine with essentially identical

Does a primary key speed up an index?

Submitted by 旧巷老猫 on 2019-12-23 08:51:25
Question: Aside from the convenient auto-increment and UNIQUE features, does the PK actually speed up the index? Will the speed be the same whether it's a non-PKed indexed INT or a PKed one (same column, two different tests)? If I had the same column on the same table on the same system, would it be faster if a UNIQUE INT column with an index also had PK enabled? Does a PK make the index it coexists with faster? Please, actual results only, with system stats if you could be so kind. Answer 1: The primary key for a

Slow MySQL inserts

Submitted by 半腔热情 on 2019-12-23 08:26:42
Question: I am using and working on software which uses MySQL as a backend engine (it can use others such as PostgreSQL, Oracle, or SQLite, but this is the main one we are using). The software was designed in such a way that the binary data we want to access is kept as BLOBs in individual columns (each table has one BLOB column; the other columns have integers/floats that characterize the BLOB, and one string column holds the BLOB's MD5 hash). The tables typically have 2, 3, or 4 indexes, one of which

Postgres choosing BTREE instead of BRIN index

Submitted by 一曲冷凌霜 on 2019-12-23 07:36:34
Question: I'm running Postgres 9.5 and am playing around with BRIN indexes. I have a fact table with about 150 million rows and I'm trying to get PG to use a BRIN index. My query is:

select sum(transaction_amt), sum(total_amt)
from fact_transaction
where transaction_date_key between 20170101 and 20170201

I created both a BTREE index and a BRIN index (default pages_per_range value of 128) on column transaction_date_key (the above query refers to January to February 2017). I would have thought

Unsigned versus signed numbers as indexes

Submitted by 微笑、不失礼 on 2019-12-23 07:30:18
Question: What's the rationale for using signed numbers as indexes in .NET? In Python, you can index from the end of an array by passing negative numbers, but this is not the case in .NET. It's not easy for .NET to add such a feature later, as it could break other code that perhaps uses special rules (yeah, a bad idea, but I guess it happens) on indexing. Not that I have ever needed to index arrays over 2,147,483,647 in size, but I really cannot understand why they chose signed numbers. Can it be
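The Python behaviour the question contrasts with takes only a couple of lines to show; the list here is an arbitrary example:

```python
# Python interprets a negative index as counting from the end of the
# sequence, so xs[-1] is the last element and xs[-2] the one before it.
# .NET array indexers accept only non-negative values; the equivalent
# lookup there would be spelled xs[xs.Length - 1].
xs = [10, 20, 30, 40]
print(xs[-1])  # 40
print(xs[-2])  # 30
```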

Is there a way to prevent Googlebot from indexing certain parts of a page?

Submitted by 霸气de小男生 on 2019-12-23 07:27:31
Question: Is it possible to fine-tune directives to Google to such an extent that it will ignore part of a page, yet still index the rest? There are a couple of different issues we've come across that would be helped by this, such as: RSS feed/news-ticker-type text on a page displaying content from an external source; users entering contact phone details etc. who want them visible on the site but would rather they not be google-able. I'm aware that both of the above can be addressed via other techniques