aggregation

Understanding UML of DoFactory Design Pattern - Decorator

陌路散爱 submitted on 2019-12-05 09:54:03

I am trying to understand the UML diagram describing the Decorator pattern at the link below: http://www.dofactory.com/Patterns/PatternDecorator.aspx I don't understand why there is an "Aggregation" relation between Decorator and Component. I believe it should be composition, as a Decorator cannot exist without the base component. Composition is stronger than aggregation; it usually means that the object takes ownership of its components. This is not the case in this situation, because a decorator doesn't own the decorated object. Moreover, you could remove the decorator without a need to remove the decorated
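To make the ownership argument concrete, here is a minimal Python sketch (class names are illustrative, not taken from the dofactory diagram): the decorator merely holds a reference to a component it did not create and does not destroy, which is exactly what aggregation expresses.

```python
class Component:
    """Base interface shared by concrete components and decorators."""
    def operation(self) -> str:
        return "component"

class Decorator(Component):
    """Holds a reference to a component it does not own (aggregation):
    the component is created elsewhere and outlives any decorator."""
    def __init__(self, inner: Component):
        self.inner = inner

    def operation(self) -> str:
        return f"decorated({self.inner.operation()})"

plain = Component()          # created independently of any decorator
wrapped = Decorator(plain)   # the decorator aggregates; it does not own
print(wrapped.operation())   # decorated(component)
del wrapped                  # removing the decorator...
print(plain.operation())     # component  <- ...leaves the component intact
```

If the Decorator constructed and destroyed its own inner Component, the relation would be composition; here the component's lifetime is independent, which is the weaker aggregation link shown in the diagram.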

How to filter an elasticsearch global aggregation?

此生再无相见时 submitted on 2019-12-05 02:22:49

What I want to achieve: I want my "age" aggregation not to be filtered by the query filter, while still being able to apply filters of its own. So if I start with this query:

```json
{
  "query": {
    "filtered": {
      "filter": { "terms": { "family_name": "Brown" } }   // filter_1
    }
  },
  "aggs": {
    "young_age": {
      "filter": { "range": { "lt": 40, "gt": 18 } },      // filter_2
      "aggs": {
        "age": { "terms": { "field": "age" } }
      }
    }
  }
}
```

my aggregation "young_age" will be filtered by both filter_1 and filter_2. I don't want my aggregation to be filtered by filter_1. As I was looking into the documentation, I thought global aggregation would solve
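A hedged sketch of the global-aggregation approach the question is heading toward: a `global` aggregation ignores the top-level query entirely, so filter_1 no longer applies inside it, and filter_2 can be re-applied as a sub-aggregation. The request body is written as a Python dict for readability; field names follow the question, and DSL details (e.g. `filtered` queries) vary between Elasticsearch versions.

```python
# Sketch, not a verified query: the "global" aggregation escapes the
# top-level filter (filter_1); only filter_2 is re-applied inside it.
body = {
    "query": {
        "filtered": {
            "filter": {"terms": {"family_name": "Brown"}}  # filter_1: hits only
        }
    },
    "aggs": {
        "all_docs": {
            "global": {},  # ignore the query/filter above
            "aggs": {
                "young_age": {
                    # filter_2 only; note the range must name its field
                    "filter": {"range": {"age": {"gt": 18, "lt": 40}}},
                    "aggs": {
                        "age": {"terms": {"field": "age"}}
                    }
                }
            }
        }
    }
}
```

The search hits would still reflect filter_1, while the `age` buckets under `all_docs.young_age` would reflect only filter_2.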

grouping every N values

本秂侑毒 submitted on 2019-12-05 02:09:02

Question: I have a table like this in PostgreSQL. I want to perform aggregation functions like mean and max over every 16 records, based on ID (which is the primary key). For example, I have to calculate the mean value for the first 16 records, the second 16 records, and so on.

```
+-----+----------+
| ID  | rainfall |
+-----+----------+
|  1  |  110.2   |
|  2  |   56.6   |
|  3  |   65.6   |
|  4  |   75.9   |
+-----+----------+
```

Answer 1: The first approach that comes to mind is to use row_number() to annotate the table, then group by blocks of

Managed COM aggregation

混江龙づ霸主 submitted on 2019-12-04 23:50:44

Question: It is my understanding that building a COM object which aggregates an existing COM object implies implementing redirection logic in the IUnknown.QueryInterface method of the outer object. The question I have is how to do that if the object you are building is managed. On managed objects, IUnknown is not explicitly implemented; COM Interop does it for you. So how do I tell COM Interop that the object I am building is an aggregation of another COM object? So far the only way I found is to implement all the

Does storing aggregated data go against database normalization?

一个人想着一个人 submitted on 2019-12-04 17:15:49

On sites like SO, I'm sure it's absolutely necessary to store as much aggregated data as possible to avoid performing all those complex queries/calculations on every page load. For instance, storing a running tally of the vote count for each question/answer, or storing the number of answers for each question, or the number of times a question has been viewed, so that these queries don't need to be performed as often. But does doing this go against database normalization, or any other standards/best practices? And what is the best way to do this, e.g., should every table have another table for
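One common shape this takes is a counter-cache column: a deliberately denormalized count kept in sync by triggers, so reads stay cheap while writes carry the maintenance cost. A runnable sketch with Python's standard sqlite3 module; all table and column names here are invented for illustration, not taken from any particular site's schema.

```python
import sqlite3

# Denormalized counter cache: questions.answer_count duplicates
# COUNT(*) over answers, maintained by insert/delete triggers.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE questions (id INTEGER PRIMARY KEY,
                        answer_count INTEGER NOT NULL DEFAULT 0);
CREATE TABLE answers   (id INTEGER PRIMARY KEY,
                        question_id INTEGER REFERENCES questions(id));
CREATE TRIGGER answers_ins AFTER INSERT ON answers BEGIN
    UPDATE questions SET answer_count = answer_count + 1
    WHERE id = NEW.question_id;
END;
CREATE TRIGGER answers_del AFTER DELETE ON answers BEGIN
    UPDATE questions SET answer_count = answer_count - 1
    WHERE id = OLD.question_id;
END;
""")
conn.execute("INSERT INTO questions (id) VALUES (1)")
conn.execute("INSERT INTO answers (question_id) VALUES (1)")
conn.execute("INSERT INTO answers (question_id) VALUES (1)")
count, = conn.execute("SELECT answer_count FROM questions WHERE id = 1").fetchone()
print(count)  # 2
```

This does violate strict normalization (the count is derivable data), which is why it is usually framed as a measured denormalization for read performance rather than a default design.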

Applying calculation per groups within R dataframe

て烟熏妆下的殇ゞ submitted on 2019-12-04 16:56:13

I have data like that:

```
object  category  country
495647  1         RUS
477462  2         GER
431567  3         USA
449136  1         RUS
367260  1         USA
495649  1         RUS
477461  2         GER
431562  3         USA
449133  2         RUS
367264  2         USA
```

... where one object appears in various (category, country) pairs and countries share a single list of categories. I'd like to add another column, which would be a category weight per country: the number of objects appearing in a category for a country, normalized to sum up to 1 within a country (summation only over unique (category, country) pairs). I could do something like: aggregate(df$object, list(df

Elasticsearch terms aggregation by strings in an array

霸气de小男生 submitted on 2019-12-04 16:36:02

Question: How can I write an Elasticsearch terms aggregation that splits the buckets by the entire term rather than by individual tokens? For example, I would like to aggregate by state, but the following returns new, york, jersey and california as individual buckets, not New York, New Jersey and California as expected:

```
curl -XPOST "http://localhost:9200/my_index/_search" -d'
{
  "aggs": {
    "states": {
      "terms": { "field": "states", "size": 10 }
    }
  }
}'
```

My use case is like the one

Producing histogram Map for IntStream raises compile-time-error

a 夏天 submitted on 2019-12-04 15:29:31

I'm interested in building a Huffman coding prototype. To that end, I want to begin by producing a histogram of the characters that make up an input Java String. I've seen many solutions on SO and elsewhere (e.g. here) that depend on using the collect() method for Streams, as well as static imports of Function.identity() and Collectors.counting(), in a very specific and intuitive way. However, when using a piece of code eerily similar to the one I linked to above: private List<HuffmanTrieNode> getCharsAndFreqs(String s){ Map<Character, Long> freqs = s.chars().collect(Collectors.groupingBy

Sumproduct using Django's aggregation

橙三吉。 submitted on 2019-12-04 14:09:31

Question: Is it possible, using Django's aggregation capabilities, to calculate a sumproduct? Background: I am modeling an invoice, which can contain multiple items. The many-to-many relationship between the Invoice and Item models is handled through the InvoiceItem intermediary table. The total amount of the invoice, amount_invoiced, is calculated by summing the product of unit_price and quantity for each item on a given invoice. Below is the code that I'm currently using to accomplish this, but I was wondering if there is a better way to handle it using Django's aggregation capabilities.

Group by column “grp” and compress DataFrame - (take last not null value for each column ordering by column “ord”)

血红的双手。 submitted on 2019-12-04 09:46:30

Assuming I have the following DataFrame:

```
+---+--------+---+----+----+
|grp|null_col|ord|col1|col2|
+---+--------+---+----+----+
|  1|    null|  3|null|  11|
|  2|    null|  2| xxx|  22|
|  1|    null|  1| yyy|null|
|  2|    null|  7|null|  33|
|  1|    null| 12|null|null|
|  2|    null| 19|null|  77|
|  1|    null| 10| s13|null|
|  2|    null| 11| a23|null|
+---+--------+---+----+----+
```

here is the same sample DF with comments, sorted by grp and ord:

```
scala> df.orderBy("grp", "ord").show
+---+--------+---+----+----+
|grp|null_col|ord|col1|col2|
+---+--------+---+----+----+
|  1|    null|  1| yyy|null|
|  1|    null|  3|null|  11|  # grp:1 - last
```