I have made some experiments with logistic regression in R, Python statsmodels and sklearn. While the results given by R and statsmodels agree, there is some discrepancy with the results returned by sklearn.
Although this post is old, I wanted to give you a solution. In your post you are comparing apples with oranges. In your R code, you are regressing "default" on "balance", "income", and "student". In your Python code, you are regressing "default" on "balance" and "income" only. Of course, you cannot get the same estimates that way. The differences also cannot be attributed to feature scaling: unlike k-means, logistic regression does not generally require it.
You are right to set a high C so that there is effectively no regularization. If you want the same output as in R, you have to change the solver to "newton-cg". Different solvers can behave differently numerically, but they optimize the same objective, so as long as your solver converges everything will be okay.
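To see that the solver choice does not change the converged solution, here is a minimal sketch on synthetic data (make_classification and the variable names X_demo/y_demo are only for illustration, not part of your question):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# synthetic data, only used to compare solvers
X_demo, y_demo = make_classification(n_samples=2000, n_features=5, random_state=0)

# the same (practically unregularized) model fitted with two different solvers
fits = {s: LogisticRegression(C=1e6, solver=s, max_iter=100000).fit(X_demo, y_demo)
        for s in ("newton-cg", "lbfgs")}

# the converged coefficients agree up to numerical precision
print(np.abs(fits["newton-cg"].coef_ - fits["lbfgs"].coef_).max())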
Here is the code that gives you the same estimates as in R and statsmodels:
import numpy as np
import pandas as pd
from patsy import dmatrices
from sklearn.linear_model import LogisticRegression
# the Default data set is available here
Default = pd.read_csv('https://d1pqsl2386xqi9.cloudfront.net/notebooks/Default.csv', index_col=0)

# encode the categorical columns as 0/1
Default['default'] = Default['default'].map({'No': 0, 'Yes': 1})
Default['student'] = Default['student'].map({'No': 0, 'Yes': 1})
# use dmatrices to build the design matrices for the logistic regression
y, X = dmatrices('default ~ balance + income + C(student)',
                 Default, return_type="dataframe")
y = np.ravel(y)
# fit the logistic regression: a very large C effectively disables regularization,
# and fit_intercept=False because dmatrices already adds an Intercept column
model = LogisticRegression(C=1e6, fit_intercept=False, solver="newton-cg", max_iter=10000000)
model = model.fit(X, y)
# examine the coefficients
pd.DataFrame(list(zip(X.columns, np.transpose(model.coef_))))
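As a cross-check (a sketch, assuming statsmodels is installed), fitting the same formula through statsmodels' formula API should reproduce these coefficients:

import statsmodels.formula.api as smf

# same model via the formula interface; params should match the sklearn estimates above
sm_fit = smf.logit('default ~ balance + income + C(student)', data=Default).fit()
print(sm_fit.params)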