reddit

Angular - Correctly using RXJS expand operator to make recursive http calls

Submitted by 我的未来我决定 on 2019-12-01 10:36:43
I am attempting to make recursive http calls to Reddit's API using a value from the previous call. The problem is that the previous call is not finished before the next one starts, so duplicate calls are being made. The "after" value should be updated for every call until the "after" value is undefined. I found this related post and have attempted to use the solution described, but I can't figure out how to make sure the previous call is finished before making the next call. Below is my actual code: private getSavedPostsForAuthenticatedUser(username: string, after: string, userPosts: any) {
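
The underlying pattern can be sketched independently of RxJS: each request has to complete before the next one starts, because the response carries the "after" cursor that seeds the following call. Below is a minimal Python sketch of that sequential loop (the fetch_page helper is hypothetical); in RxJS, the expand operator plays the same role by feeding each emitted response back into the projection function until it returns EMPTY, which is what guarantees the ordering.

def fetch_all_saved_posts(fetch_page):
    # Accumulate pages until reddit stops returning an "after" cursor.
    posts = []
    after = None
    while True:
        page = fetch_page(after)        # blocks until the previous call finishes
        posts.extend(page["children"])  # collect this page's posts
        after = page.get("after")       # cursor that seeds the next request
        if after is None:               # reddit sends null when there is no next page
            return posts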

Baking-Pi Challenge - Understanding & Improving

Submitted by 心不动则不痛 on 2019-12-01 00:26:50
I spent some time yesterday writing the solution for this challenge published on Reddit, and was able to get through it without cheating, but I was left with a couple of questions. Reference material here. This is my code. (ns baking-pi.core (:import java.math.MathContext)) (defn modpow [n e m] (.modPow (biginteger n) (biginteger e) (biginteger m))) (defn div [top bot] (with-precision 34 :rounding HALF_EVEN (/ (bigdec top) (bigdec bot)))) (defn pow [n e] (.pow (bigdec n) (bigdec e) MathContext/DECIMAL128)) (defn round ([n] (.round (bigdec n) MathContext/DECIMAL128)) ([n & args] (->> [n args]
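
For orientation: the Clojure modpow above just wraps java.math.BigInteger.modPow, the modular exponentiation that BBP-style digit-extraction formulas (the usual approach to this challenge) rely on. In Python the built-in three-argument pow does the same thing without ever materialising the huge intermediate power; a small sketch for comparison:

def modpow(n, e, m):
    # (n ** e) mod m, computed with modular reduction at every step
    return pow(n, e, m)

# Sanity check against the naive (and far slower) computation
assert modpow(16, 10_000, 80_001) == (16 ** 10_000) % 80_001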

I am unable to use Reddit's APIs to log in

Submitted by 痴心易碎 on 2019-11-30 09:02:34
Question: I'm trying to use the Reddit API to do some stuff. I have everything working except changing pages and logging in. I need to log in to use my program; I know how to use the cookie I get, but I just can't manage to log in. Here's the code: public static Login POST(URL url, String user, String pw) throws IOException { String encodedData = URLEncoder.encode("api_type=json&user=" + user +"&passwd="+pw, "UTF-8"); HttpURLConnection ycConnection = null; ycConnection = (HttpURLConnection) url
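
One thing that stands out in the snippet is that URLEncoder.encode is applied to the whole "api_type=json&user=...&passwd=..." string, which also encodes the = and & separators; each value should be encoded on its own. As a point of comparison, here is a minimal Python sketch of a correctly form-encoded POST. The legacy /api/login endpoint is shown only for illustration (reddit has since moved to OAuth2), and the User-Agent string is a placeholder.

from urllib.parse import urlencode
import urllib.request

def legacy_login(user, pw):
    # Encode each field separately so the = and & separators survive intact
    body = urlencode({"api_type": "json", "user": user, "passwd": pw}).encode()
    req = urllib.request.Request(
        "https://www.reddit.com/api/login/" + user,
        data=body,
        headers={"User-Agent": "example-script/0.1"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()  # JSON response describing the session cookie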

Ruby - iterate over parsed JSON

Submitted by 徘徊边缘 on 2019-11-30 08:01:02
Question: I'm trying to iterate over a parsed JSON response from reddit's API. I've done some googling and it seems others have had this issue, but none of the solutions seem to work for me. Ruby is treating ['data']['children'] as indexes and that's causing the error, but I'm just trying to grab these values from the JSON. Any advice? My code: require "net/http" require "uri" require "json" uri = URI.parse("http://www.reddit.com/user/brain_poop/comments/.json") response = Net::HTTP.get_response(uri) data =
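
For reference, reddit's listing JSON nests the items under data, then children, and each child wraps its fields in another data object; the same shape applies to the hash Ruby's JSON.parse returns. A short Python sketch of walking that structure (the User-Agent string is a placeholder, and the field names assume the endpoint still returns the usual comment listing):

import json
import urllib.request

url = "http://www.reddit.com/user/brain_poop/comments/.json"
req = urllib.request.Request(url, headers={"User-Agent": "example-script/0.1"})
with urllib.request.urlopen(req) as resp:
    listing = json.load(resp)

for child in listing["data"]["children"]:
    comment = child["data"]              # the actual comment fields live one level down
    print(comment["subreddit"], comment["body"][:60])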

urllib2 HTTP error 429

Submitted by 那年仲夏 on 2019-11-30 07:10:32
So I have a list of sub-reddits and I'm using urllib to open them. As I go through them, urllib eventually fails with: urllib2.HTTPError: HTTP Error 429: Unknown Doing some research I found that reddit limits the amount of requests to their servers by IP: Make no more than one request every two seconds. There's some allowance for bursts of requests, but keep it sane. In general, keep it to no more than 30 requests in a minute. So I figured I'd use time.sleep() to limit my requests to one page every 10 seconds. This ends up failing just as well. The quote above is grabbed from the reddit API
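
Two things usually matter here: actually spacing the requests out, and sending a unique, descriptive User-Agent, since reddit's API rules say default library agents (like urllib's) are throttled far more aggressively than the documented limit. A minimal sketch under that assumption, with a placeholder User-Agent:

import time
import urllib.request

HEADERS = {"User-Agent": "my-subreddit-reader/0.1 (contact: example@example.com)"}

def fetch(url):
    # One request at a time, identified by a descriptive User-Agent
    req = urllib.request.Request(url, headers=HEADERS)
    with urllib.request.urlopen(req) as resp:
        return resp.read()

for sub in ["python", "programming", "learnprogramming"]:
    page = fetch("https://www.reddit.com/r/%s/.json" % sub)
    time.sleep(2)  # at most one request every two seconds, per the quoted guideline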

What is the alphanumeric id in a reddit URL?

Submitted by 為{幸葍}努か on 2019-11-30 05:07:44
What is the 7n5lu in the reddit URL http://www.reddit.com/r/reddit.com/comments/7n5lu/man_can_fly_if_you_watch_one_video_in_2 ...and how is it generated? Update: @Gerald, I initially thought this is some obfuscation of the id. It is just doing the conversion from integer to a more compact representation. I am thinking, why is this being done? Why not use the original integer itself!! >>> to36(4000) '334' >>> to36(4001) '335' Gerald Kaszuba: The reddit source code is available! Here is what I found for generating that string: def to_base(q, alphabet): if q < 0: raise ValueError, "must supply a
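
In other words, 7n5lu is simply the submission's integer id written in base 36 (digits 0-9 plus a-z), which keeps URLs short; the quoted to_base function is the general-purpose version of that. A small Python sketch of both directions:

ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyz"

def to36(q):
    # Render a non-negative integer in base 36, reddit-style
    if q < 0:
        raise ValueError("must supply a positive integer")
    digits = []
    while True:
        q, r = divmod(q, 36)
        digits.append(ALPHABET[r])
        if q == 0:
            return "".join(reversed(digits))

def from36(s):
    return int(s, 36)  # Python can parse base-36 directly

assert to36(4000) == "334" and to36(4001) == "335"  # matches the examples above
print(from36("7n5lu"))  # the integer id hiding behind the URL above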

If you wanted to land an interview at a big company, what would you do?

Submitted by 谁说我不能喝 on 2019-11-29 23:37:31
In brief: to land a job at Reddit, someone wrote a self-recommendation blog post, then used a little social engineering to serve the Reddit CEO a Facebook ad for it, and got the interview. All of it cost just $10.62, cheaper than hiring someone to write a resume. Two years ago, Chris Seline quit his job at a startup and began looking for his next opportunity. Chris didn't want to take the conventional job-hunting route; he decided to try something different and came up with a plan: write a self-recommendation blog post, get it noticed by one specific person at the company, and have that person invite him in. The company Chris wanted to join was Reddit. The plan got underway. Chris knew that Reddit's CEO was a technical founder, so he poured a great deal of effort into the blog post in order to impress him. The next problem: how do I get the Reddit CEO to see my post? Chris's first thought was email, but emailing it directly seemed too dull. Better to get the post onto the Hacker News front page; although Chris was fairly sure the Reddit CEO still browsed Hacker News regularly, a self-recommendation post would not necessarily interest enough people to reach the front page. Then he remembered a tactic startups use all the time: put the product in front of potential customers with Facebook ads. So Chris decided to treat the blog post as his product and use targeted Facebook ads to deliver it to

Getting more than 100 search results with PRAW?

Submitted by 筅森魡賤 on 2019-11-29 20:05:33
Question: I'm using the following code to obtain reddit search results with PRAW 4.4.0: params = {'sort':'new', 'time_filter':'year'} return reddit.subreddit(subreddit).search('', **params) I'd like to scrape an indefinite number of posts from the subreddit, for a period of up to a year. Reddit's search functionality (and correspondingly, their API) achieves this with the 'after' parameter. However, the above search function doesn't accept 'after' as a parameter. Is there a way to use PRAW's .search()
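
A sketch of the usual answer, assuming PRAW 4 or later: PRAW's listing generators follow the after cursor internally, so passing limit=None keeps requesting pages until reddit stops returning results (listings are capped server-side, historically at around 1000 items). The credentials below are placeholders.

import praw

reddit = praw.Reddit(
    client_id="PLACEHOLDER_ID",
    client_secret="PLACEHOLDER_SECRET",
    user_agent="example-search/0.1",
)

def search_all(subreddit_name):
    # limit=None tells the listing generator to keep following the "after"
    # cursor itself instead of stopping at the default of 100 results
    return reddit.subreddit(subreddit_name).search(
        "", sort="new", time_filter="year", limit=None
    )

for submission in search_all("learnpython"):
    print(submission.created_utc, submission.title)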
