analytics

Finding matches in two files and outputting them

不想你离开。 submitted on 2019-12-11 20:07:31

Question: I want to compare x[#] from the first file with x[#] from the second file. If the two values match, I want to output them, along with several other x[#] values from the second file that are on the same line. Both files look like this (but there are millions of lines, and I want to find the pairs across the two files because they should all match up):

    line 1: data,data,data,data
    line 2: data,data,data,data

Data from file 1: (N'068D556A1A665123A6DD2073A36C1CAF', N
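A minimal sketch of one way to do this in Python, assuming the shared key is the first comma-separated field on each line (the file names and column index below are placeholders):

    import csv

    KEY_COL = 0  # assumed: index of the column the two files share

    # Build a set of keys from the first file.
    with open("file1.csv", newline="") as f1:
        keys_in_file1 = {row[KEY_COL] for row in csv.reader(f1) if row}

    # Stream the second file and keep the full line whenever its key also appears in file 1.
    with open("file2.csv", newline="") as f2, open("matches.csv", "w", newline="") as out:
        writer = csv.writer(out)
        for row in csv.reader(f2):
            if row and row[KEY_COL] in keys_in_file1:
                writer.writerow(row)

Holding only the first file's keys in memory keeps the footprint manageable even with millions of lines.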

Omniture SiteCatalyst Tracking: Error when calling s.tl() from within a function with jQuery without binding to click event

泪湿孤枕 submitted on 2019-12-11 19:43:28

Question: I want to track when a user submits a form using Omniture's "Custom Link Tracking". This feature uses the built-in function s.tl(). A typical setup looks like this:

    $('a#submit').click(function () {
        s.trackExternalLinks = false;
        s.linkTrackVars = 'events,prop1';
        s.linkTrackEvents = s.events = 'event1';
        s.prop1 = s.pageName;
        s.tl(this, 'o', 'Form Submitted');
    });

This code works fine when the example link (<a id="submit">) is clicked. Say, instead, we want to call a function to trigger

Is there a way to perform a ranking in sql without using either analytic functions or correlated subqueries?

走远了吗. submitted on 2019-12-11 19:09:25

Question: Consider:

    select row_number() over (partition by product_category order by price desc) ARank, *
    from Product

Now, instead of using an analytic function such as row_number() or rank(), and without using correlated subqueries, is there a way to obtain the same results in standard SQL? Note: there is an excellent Q&A on how to emulate the analytic functions: Implement Rank without using analytic function. However, all of the answers use correlated subqueries. The motivation is the following: I
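For intuition, the usual non-correlated workaround is to self-join (or group) on the partition and count how many rows in the same category have a strictly higher price; the rank is that count plus one. A small sketch of that counting logic in Python, with made-up sample rows:

    from collections import defaultdict

    # Hypothetical rows: (product_category, product, price)
    products = [
        ("books", "A", 30), ("books", "B", 20), ("books", "C", 20),
        ("games", "D", 60), ("games", "E", 50),
    ]

    by_category = defaultdict(list)
    for category, name, price in products:
        by_category[category].append((name, price))

    # rank = 1 + number of rows in the same category priced strictly higher
    for category, rows in by_category.items():
        for name, price in rows:
            rank = 1 + sum(1 for _, other in rows if other > price)
            print(category, name, price, rank)

Note this reproduces rank() semantics (ties share a rank); emulating row_number() additionally needs a unique tiebreaker column.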

Track a Google Analytics event that occurs when a user isn't on the site

不羁岁月 submitted on 2019-12-11 18:53:36

Question: Our site enables users to send money to us via a bank transfer. They 'initiate' the transfer and get a reference code. Then they log in to their bank account and send the money using the reference code. We track when a user promises to send the money, but I'm not sure how to track when that money arrives. They won't be logged into the site when it happens, which makes it complicated. Is there a way to do this? Thanks in advance!

Answer 1: It is possible to track events on behalf of a recent user,
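One way to record such an event without the user being on the site is a server-side hit through Google Analytics' (Universal Analytics) Measurement Protocol, reusing the GA client ID captured when the transfer was initiated. A rough sketch, with a placeholder tracking ID and assumed event names:

    import requests

    def track_payment_received(client_id: str, reference_code: str) -> None:
        """Send a server-side event hit via the Universal Analytics Measurement Protocol."""
        payload = {
            "v": "1",               # protocol version
            "tid": "UA-XXXXXXX-1",  # placeholder tracking ID
            "cid": client_id,       # GA client ID stored when the user initiated the transfer
            "t": "event",           # hit type
            "ec": "payments",       # assumed event category
            "ea": "bank_transfer_received",
            "el": reference_code,
        }
        requests.post("https://www.google-analytics.com/collect", data=payload, timeout=5)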

Oracle - grouping between pairs of records

半城伤御伤魂 submitted on 2019-12-11 18:05:38

Question: I have a logging table that contains data that looks like this:

    ID   MSG                 DATE
    ------------------------------------------------
    1    TEst                2010-01-01 09:00:00
    2    Job Start           2010-01-01 09:03:00
    3    Do something        2010-01-01 09:03:10
    4    Do something else   2010-01-01 09:03:12
    5    Do something        2010-01-01 09:04:19
    6    Job End             2010-01-01 09:06:30
    7    Job Start           2010-01-01 09:18:03
    8    Do something        2010-01-01 09:18:17
    9    Do other thing      2010-01-01 09:19:48
    10   Job End             2010-01-01 09:20:27

It contains (among other things) messags
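The usual trick (in SQL, a running count of 'Job Start' rows ordered by date) is to give every row a group number equal to how many 'Job Start' messages have appeared so far. A small sketch of that grouping logic in Python, over a subset of the sample rows above:

    rows = [
        (1, "TEst", "2010-01-01 09:00:00"),
        (2, "Job Start", "2010-01-01 09:03:00"),
        (3, "Do something", "2010-01-01 09:03:10"),
        (6, "Job End", "2010-01-01 09:06:30"),
        (7, "Job Start", "2010-01-01 09:18:03"),
        (10, "Job End", "2010-01-01 09:20:27"),
    ]

    group_id = 0
    for row_id, msg, ts in sorted(rows, key=lambda r: r[2]):
        if msg == "Job Start":
            group_id += 1                     # a new job run begins here
        job = group_id if group_id else None  # rows before the first start belong to no run
        print(row_id, msg, ts, "job:", job)

Rows that fall after a 'Job End' but before the next 'Job Start' would still inherit the previous run's number and need extra handling if that is not wanted.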

click paths in logs or analytics?

孤人 submitted on 2019-12-11 17:32:01

Question: I'm trying to see the path my users take when clicking through a web app I have. I've got logs, AWStats and Webalizer on the server side, and I'm looking to install some sort of analytics product. I don't see any breadcrumb/click-path data in my log files. Am I missing it? Barring that, what analytics products (Yahoo, Google, etc.) can do this? Thanks.

Answer 1: You can try GAVisual, a small tool for Google Analytics which can show you user paths with a wave (page-by-page) visualisation. It uses GA
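Raw access logs do contain the ingredients of a click path (client, timestamp, URL); they just aren't assembled into one. A rough sketch of reconstructing per-visitor paths from a combined-format access log in Python, approximating a visitor by client IP (real analytics tools use cookies or session IDs):

    import re
    from collections import defaultdict

    # Matches the start of a combined-log-format line: IP, identity, user, [timestamp], "METHOD path ..."
    LINE_RE = re.compile(r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?:GET|POST) (?P<path>\S+)')

    paths_by_visitor = defaultdict(list)
    with open("access.log") as log:          # placeholder log file name
        for line in log:
            m = LINE_RE.match(line)
            if m:
                paths_by_visitor[m.group("ip")].append(m.group("path"))

    for ip, pages in paths_by_visitor.items():
        print(ip, " -> ".join(pages))

Access logs are normally written in chronological order, so appending preserves the click sequence for each visitor.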

Algorithm for finding trends in data? [closed]

白昼怎懂夜的黑 submitted on 2019-12-11 14:14:23

Question: I'm looking for an algorithm that is able to find trends in large amounts of data. For instance, given time t and a variable x as pairs (t, x), with input such as {(1,1), (2,4), (3,9), (4,16)}, it should be able to figure out that the value of x for t=5 is 25. How is this
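A minimal illustration of one common approach, least-squares polynomial fitting with NumPy, which recovers the quadratic trend in the example; noisy real-world data generally calls for more robust regression or time-series models:

    import numpy as np

    t = np.array([1, 2, 3, 4])
    x = np.array([1, 4, 9, 16])

    # Fit a degree-2 polynomial by least squares; for this data it recovers x = t**2.
    coeffs = np.polyfit(t, x, deg=2)
    print(np.polyval(coeffs, 5))   # ~25.0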

How can I iterate through multiple dataframes to select a column in each in python?

╄→尐↘猪︶ㄣ submitted on 2019-12-11 13:51:32

Question: For my project I'm reading in CSV files with data from every state in the US. My function converts each of these into a separate DataFrame, as I need to perform operations on each state's information.

    def RanktoDF(csvFile):
        df = pd.read_csv(csvFile)
        df = df[pd.notnull(df['Index'])]  # drop all null values
        df = df[df.Index != 'Index']      # drop all extra headers
        df = df.set_index('State')        # set State as index
        return df

I apply this function to every one of my files and return the df with a name from
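The usual answer is to keep the DataFrames in a dict keyed by file name rather than in separately named variables, so one loop can pull the same column from each. A sketch that reuses the question's RanktoDF; the file names and the 'Rank' column are placeholders:

    from pathlib import Path

    files = ["alabama.csv", "alaska.csv", "arizona.csv"]   # placeholder file names

    # One DataFrame per state, keyed by the file stem instead of separate variables.
    frames = {Path(name).stem: RanktoDF(name) for name in files}

    # A single loop can now select the same column from every DataFrame.
    for state, df in frames.items():
        print(state, df["Rank"].head())   # 'Rank' is a placeholder column name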

How to track how many automatic-upgrade downloads my Greasemonkey script has?

送分小仙女□ submitted on 2019-12-11 10:22:26

Question: I have a page with a series of links to Greasemonkey userscripts. The userscripts have their @updateURL parameters set, so that when I upload a new version to the page, people who have the script installed either have their version updated automatically or are asked whether they want to upgrade. This all works fine. I have Google Analytics installed on the hosting page, so I can see how many hits the page gets, but what I would like to know is how many people are getting the updates. The normal
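Automatic update checks fetch the @updateURL directly, so page-level Google Analytics never sees them; one low-tech option is to count those requests in the web server's access log. A rough sketch, assuming a combined-format log and a placeholder script path:

    from collections import Counter

    UPDATE_PATH = "/scripts/myscript.meta.js"   # placeholder @updateURL path

    daily_checks = Counter()
    with open("access.log") as log:             # placeholder log file name
        for line in log:
            if UPDATE_PATH not in line or "[" not in line:
                continue
            date = line.split("[", 1)[1].split(":", 1)[0]   # e.g. 11/Dec/2019
            daily_checks[date] += 1

    for date, count in daily_checks.items():
        print(date, count)

Counting unique client IPs per day instead of raw hits gives a closer estimate of how many people are actually updating.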

Efficient way to generate 2 billion rows in SQL Server 2014 Developer

匆匆过客 submitted on 2019-12-11 09:53:13

Question: Long story short: I am testing a system that purges entries from a table over a network connection, and the functionality is expected to handle roughly 2 billion entries at most. I need to stress test this to be certain. Here's my test script (at best it generates 9.8 million rows in ten minutes):

    DECLARE @I INT = 0
    WHILE @I < 2000000001
    BEGIN
        INSERT INTO "Table here" VALUES (@I)
        SET @I = @I + 1
    END

Can anyone suggest anything, or give me an idea what the upper limits of my test environment might be
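The row-at-a-time WHILE loop is the bottleneck: each INSERT is its own statement and, in autocommit mode, its own transaction. The usual fix is a set-based INSERT ... SELECT from a numbers/tally construct inside SQL Server; if the rows must come from a client over the network, batching them also helps enormously. A rough client-side sketch with pyodbc (connection string and table name are placeholders):

    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
        "DATABASE=TestDB;Trusted_Connection=yes;"          # placeholder connection string
    )
    cursor = conn.cursor()
    cursor.fast_executemany = True    # send whole parameter batches per round trip

    BATCH = 100_000
    TOTAL = 2_000_000_001             # mirrors the WHILE loop's upper bound
    for start in range(0, TOTAL, BATCH):
        rows = [(i,) for i in range(start, min(start + BATCH, TOTAL))]
        cursor.executemany("INSERT INTO dbo.TestTable (Id) VALUES (?)", rows)
        conn.commit()                 # commit per batch to keep the transaction log in check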