python-3.6

Transform string to f-string

﹥>﹥吖頭↗ submitted on 2019-12-05 02:28:58
How do I transform a classic string into an f-string?

    variable = 42
    user_input = "The answer is {variable}"
    print(user_input)
    # The answer is {variable}
    f_user_input = ...  # Here, the operation to go from a string to an f-string
    print(f_user_input)
    # The answer is 42

An f-string is syntax, not an object type. You can't convert an arbitrary string to that syntax; the syntax creates a string object, not the other way around. I'm assuming you want to use user_input as a template, so just use the str.format() method on the user_input object:

    variable = 42
    user_input = "The answer is {variable}"
    formatted …
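The answer is cut off above, but the str.format() call it is describing would look like this (a minimal sketch; the formatted name comes from the truncated excerpt):

    variable = 42
    user_input = "The answer is {variable}"
    # str.format() substitutes the named placeholder at call time,
    # which is what an f-string would have done at literal-creation time
    formatted = user_input.format(variable=variable)
    print(formatted)
    # The answer is 42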

How to run one airflow task and all its dependencies?

ぐ巨炮叔叔 submitted on 2019-12-05 01:39:14
I suspected that airflow run dag_id task_id execution_date would run all upstream tasks, but it does not. It simply fails when it sees that not all dependent tasks have run. How can I run a specific task and all of its dependencies? I am guessing this is not possible because of an Airflow design decision, but is there a way around it?

You can run a task independently by using the -i / -I / -A flags (ignore task dependencies / ignore depends_on_past / ignore all dependencies) along with the run command. But yes, the design of Airflow does not permit running a specific task and all of its dependencies. You can backfill the DAG by removing non-related tasks from the DAG, for …
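For reference, the ignore flags mentioned above come from the Airflow 1.x CLI; the DAG id, task id, and date below are placeholders of my own:

    # run only this task instance, ignoring its upstream/task dependencies
    airflow run -i my_dag my_task 2019-12-01

    # ignore all dependency checks, including previous task-instance state
    airflow run -A my_dag my_task 2019-12-01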

Can you overload the Python 3.6 f-string's “operator”?

放肆的年华 submitted on 2019-12-05 01:35:25
In Python 3.6, you can use f-strings like:

    >>> date = datetime.date(1991, 10, 12)
    >>> f'{date} was on a {date:%A}'
    '1991-10-12 was on a Saturday'

I want to overload the method receiving the '%A' above. Can it be done? For example, if I wanted to write a dumb wrapper around datetime, I might expect this overloading to look something like:

    class MyDatetime:
        def __init__(self, my_datetime, some_other_value):
            self.dt = my_datetime
            self.some_other_value = some_other_value

        def __fstr__(self, format_str):
            return (
                self.dt.strftime(format_str)
                + 'some other string'
                + str(self.some_other_value)
            )

Yes, …
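There is no __fstr__ hook, but an f-string hands the format spec to the object's __format__ method, so the wrapper sketched in the question can be written like this (a minimal sketch; the extra string is arbitrary):

    import datetime

    class MyDatetime:
        def __init__(self, my_datetime, some_other_value):
            self.dt = my_datetime
            self.some_other_value = some_other_value

        def __format__(self, format_str):
            # f'{obj:%A}' calls obj.__format__('%A')
            return (
                self.dt.strftime(format_str)
                + ' some other string '
                + str(self.some_other_value)
            )

    print(f"{MyDatetime(datetime.datetime(1991, 10, 12), 42):%A}")
    # Saturday some other string 42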

How to create a self-referential Python 3 Enum?

谁说胖子不能爱 submitted on 2019-12-05 00:28:40
Can I create an enum class RockPaperScissors such that ROCK.value == "rock" and ROCK.beats == SCISSORS, where ROCK and SCISSORS are both constants in RockPaperScissors?

Enum members are instances of the type. This means you can just use a regular property:

    from enum import Enum

    class RockPaperScissors(Enum):
        Rock = "rock"
        Paper = "paper"
        Scissors = "scissors"

        @property
        def beats(self):
            lookup = {
                RockPaperScissors.Rock: RockPaperScissors.Scissors,
                RockPaperScissors.Scissors: RockPaperScissors.Paper,
                RockPaperScissors.Paper: RockPaperScissors.Rock,
            }
            return lookup[self]

By picking the order …
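A quick check of the property in use (demo lines of my own, not from the answer):

    >>> RockPaperScissors.Rock.beats
    <RockPaperScissors.Scissors: 'scissors'>
    >>> RockPaperScissors.Rock.beats is RockPaperScissors.Scissors
    True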

What are formatted string literals in Python 3.6?

China☆狼群 submitted on 2019-12-04 22:49:20
One of the features of Python 3.6 is formatted string literals. This SO question (String with 'f' prefix in python-3.6) asks about the internals of formatted string literals, but I don't understand their exact use case. In which situations should I use this feature? Isn't explicit better than implicit?

xmcp: Simple is better than complex. So here we have formatted strings. They bring simplicity to string formatting while keeping the code explicit (compared to other string formatting mechanisms).

    title = 'Mr.'
    name = 'Tom'
    count = 3
    # This is explicit but complex …
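The comparison the answer is building up to is truncated, so here is a minimal completion of the same idea (my own, reusing the answer's variables):

    title = 'Mr.'
    name = 'Tom'
    count = 3

    # Explicit but complex: every placeholder must be wired up by hand
    print('{title} {name} has {count} apples'.format(title=title, name=name, count=count))

    # Simple and still explicit: names appear exactly where they are used
    print(f'{title} {name} has {count} apples')

    # Both print: Mr. Tom has 3 apples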

In-place custom object unpacking different behavior with __getitem__ python 3.5 vs python 3.6

对着背影说爱祢 submitted on 2019-12-04 21:24:09
Question: a follow-up to this question: I ran the code below on Python 3.5 and Python 3.6, with very different results:

    class Container:
        KEYS = ('a', 'b', 'c')

        def __init__(self, a=None, b=None, c=None):
            self.a = a
            self.b = b
            self.c = c

        def keys(self):
            return Container.KEYS

        def __getitem__(self, key):
            if key not in Container.KEYS:
                raise KeyError(key)
            return getattr(self, key)

        def __str__(self):
            # python 3.6
            # return f'{self.__class__.__name__}(a={self.a}, b={self.b}, c={self.c})'
            # python 3.5
            …
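For context, this Container implements the mapping protocol that ** unpacking relies on: unpacking calls keys() and then __getitem__ for each key (a small demo of my own):

    c = Container(a=1, b=2, c=3)
    print({**c})        # {'a': 1, 'b': 2, 'c': 3}
    print(dict(c))      # same result via the mapping protocol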

Install pythonnet on Ubuntu 18.04, Python 3.6.7 64-bit, Mono 5.16 fails

柔情痞子 submitted on 2019-12-04 20:28:18
I would like to install pythonnet on Ubuntu, but it fails. This is what I tried so far:

    /usr/bin/python3 -m pip install -U pythonnet --user

Error:

    Collecting pythonnet
      Using cached https://files.pythonhosted.org/packages/89/3b/a22cd45b591d6cf490ee8b24d52b9db1f30b4b478b64a9b231c53474731e/pythonnet-2.3.0.tar.gz
    Building wheels for collected packages: pythonnet
      Running setup.py bdist_wheel for pythonnet ... error
      Complete output from command /usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-sv3ax85u/pythonnet/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f…

Memory-efficient large dataset streaming to S3

孤人 submitted on 2019-12-04 17:32:50
I am trying to copy a large dataset (larger than RAM) over to S3 using SQLAlchemy. My constraints are:

- I need to use sqlalchemy
- I need to keep memory pressure at its lowest
- I don't want to use the local filesystem as an intermediary step to send data to S3

I just want to pipe data from a DB to S3 in a memory-efficient way. I can do it with normal-sized datasets (using the logic below), but with a larger dataset I hit a buffer issue. The first problem I solved is that executing a query usually buffers the whole result in memory. I use the fetchmany() method:

    engine = sqlalchemy.create_engine(db_url)
    engine.execution…
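One way to wire the pieces together, sketched under my own assumptions (smart_open for the S3 side, a Postgres-style URL, CSV as the wire format; the question only mandates sqlalchemy and low memory):

    import sqlalchemy
    from smart_open import open as s3_open  # pip install smart_open[s3]

    engine = sqlalchemy.create_engine('postgresql://user:pass@host/db')  # placeholder URL

    # stream_results asks the driver for a server-side cursor, so the
    # full result set is never buffered in client memory at once
    conn = engine.connect().execution_options(stream_results=True)
    result = conn.execute(sqlalchemy.text('SELECT * FROM big_table'))

    # smart_open exposes the S3 object as a file-like sink and performs
    # a multipart upload under the hood, so nothing touches local disk
    with s3_open('s3://my-bucket/big_table.csv', 'w') as fout:
        while True:
            rows = result.fetchmany(10000)  # bounded chunk of rows
            if not rows:
                break
            for row in rows:
                fout.write(','.join(str(v) for v in row) + '\n')

    conn.close()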

Python Pandas: TypeError: unsupported operand type(s) for +: 'datetime.time' and 'Timedelta'

独自空忆成欢 submitted on 2019-12-04 12:14:24
Question: I am attempting to add two series in a pandas DataFrame. The first series holds 24-hr time values (e.g. 17:30) exported from an Excel file, and the second series, of the same length, holds Timedelta values converted from floats with the pd.Timedelta command. The desired third column would be a 24-hr time regardless of day change (e.g. 22:00 + 4 hours = 02:00). I created the delta series like this:

    delta = pd.Series(0 for x in range(0, len(df.Time_In_Hours)))
    for j …
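The TypeError arises because datetime.time objects don't support arithmetic at all. One workaround (my own sketch, with assumed column names) is to go through full datetimes and take the time back out, which handles day rollover for free:

    import pandas as pd

    df = pd.DataFrame({'Time': ['22:00', '17:30'],
                       'Time_In_Hours': [4.0, 1.5]})

    # attach an arbitrary date so addition is defined, add, then drop the date
    start = pd.to_datetime(df['Time'], format='%H:%M')
    delta = df['Time_In_Hours'].apply(lambda h: pd.Timedelta(hours=h))
    df['End_Time'] = (start + delta).dt.time
    print(df['End_Time'])  # 02:00:00 and 19:00:00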

IllegalArgumentException with Spark collect() on Jupyter

大憨熊 submitted on 2019-12-04 09:48:04
I have a setup with Jupyter 4.3.0, Python 3.6.3 (Anaconda), and PySpark 2.2.1. The following example fails when run through Jupyter:

    sc = SparkContext.getOrCreate()
    rdd = sc.parallelize(['A','B','C'])
    rdd.collect()

Below is the complete stack trace:

    ---------------------------------------------------------------------------
    Py4JJavaError                             Traceback (most recent call last)
    <ipython-input-35-0d4a2ca9edf4> in <module>()
          2
          3 rdd = sc.parallelize(['A','B','C'])
    ----> 4 rdd.collect()
    /usr/local/Cellar/apache-spark/2.2.1/libexec/python/pyspark/rdd.py in collect(self)
        807         """
        808         with …
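The trace is cut off above, but one frequent cause of errors like this with Spark 2.2 is launching it on Java 9+, which Spark 2.2 does not support; pinning JAVA_HOME to a Java 8 install before creating the context is a common fix (the path below is a typical macOS value, not from the question):

    import os
    # must be set before the first SparkContext launches the JVM
    os.environ['JAVA_HOME'] = '/Library/Java/JavaVirtualMachines/jdk1.8.0_181.jdk/Contents/Home'

    from pyspark import SparkContext
    sc = SparkContext.getOrCreate()
    print(sc.parallelize(['A', 'B', 'C']).collect())  # ['A', 'B', 'C']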