
(Repost) How do I convert a scikit-learn Bunch dataset into a pandas DataFrame?

Date: 2024-03-09 07:15:00


Reposted from: [/article/4362.html]

```python
from sklearn.datasets import load_iris
import pandas as pd

data = load_iris()
print(type(data))  # output: <class 'sklearn.utils.Bunch'>

data1 = pd.  # Is there a pandas method to accomplish this?
```

Best approach

You can build the DataFrame manually with the pd.DataFrame constructor, passing a numpy array (data) and a list of column names (columns). To get everything into a single DataFrame, concatenate the features and the target (labels) into one numpy array with np.c_[...] (note the square-bracket operator):

```python
import numpy as np
import pandas as pd
from sklearn.datasets import load_iris

# save the load_iris() sklearn dataset to iris
# to check the dataset type use: type(load_iris())
# to view the list of attributes use: dir(load_iris())
iris = load_iris()

# np.c_ concatenates the iris['data'] and iris['target'] arrays.
# For the pandas columns argument: concat the iris['feature_names'] list
# and a string list (in this case one string); you can name this anything
# you'd like -- the original dataset would probably call it ['Species'].
data1 = pd.DataFrame(data=np.c_[iris['data'], iris['target']],
                     columns=iris['feature_names'] + ['target'])
```

The same idea works for synthetic data. One pitfall: in Python 2, range() returned a list, so two ranges (or a range and a list) could be added directly, e.g. range(5) + range(10). In Python 3, range() is a class that, to save memory, stores only its start, stop, and step and yields values lazily like an iterator, so the same expression raises TypeError: unsupported operand type(s) for +: 'range' and 'range'. Wrap the range in list() first, e.g. list(range(5)) + list(range(10)):

```python
import numpy as np
from pandas import DataFrame
from sklearn.datasets import make_classification

# use make_classification to build 1000 samples, each with 20 features
X, y = make_classification(1000, n_features=20, n_informative=2,
                           n_redundant=2, n_classes=2, random_state=0)

# store as a DataFrame; list(range(20)) is required in Python 3
df = DataFrame(np.hstack((X, y[:, None])),
               columns=list(range(20)) + ["class"])
```
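As a quick sanity check on the result (a minimal sketch; the 'species' column name is my own addition, not part of the original answer), the numeric target column can be mapped back to the class names:

```python
import numpy as np
import pandas as pd
from sklearn.datasets import load_iris

iris = load_iris()
data1 = pd.DataFrame(data=np.c_[iris['data'], iris['target']],
                     columns=iris['feature_names'] + ['target'])

# map the 0/1/2 target codes back to the species names
data1['species'] = data1['target'].map(dict(enumerate(iris['target_names'])))
print(data1.shape)  # (150, 6)
```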

For reference, inspecting the Bunch object:

```
>>> type(load_iris())
sklearn.utils.Bunch

>>> dir(load_iris())
['DESCR', 'data', 'feature_names', 'target', 'target_names']

'feature_names': ['sepal length (cm)', 'sepal width (cm)',
                  'petal length (cm)', 'petal width (cm)'],
'target': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
                 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
                 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
                 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
                 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]),
'target_names': array(['setosa', 'versicolor', 'virginica'])
```
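The listing above can be verified directly. As a minimal sketch (my own check, not from the original post; newer scikit-learn versions expose a few extra keys beyond those shown), a Bunch supports both dict-style and attribute access to the same objects:

```python
from sklearn.datasets import load_iris

iris = load_iris()
print(sorted(iris.keys()))                 # includes 'data', 'feature_names', 'target', ...
print(iris.data.shape, iris.target.shape)  # (150, 4) (150,)

# Bunch is a dict subclass; attribute access returns the same objects
assert iris['data'] is iris.data
assert list(iris.target_names) == ['setosa', 'versicolor', 'virginica']
```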

Second approach

The "best approach" above is not general enough for all of scikit-learn's datasets; for example, it does not work for the Boston housing dataset. Here is a different, more general solution that also avoids numpy:

```python
from sklearn import datasets
import pandas as pd

boston_data = datasets.load_boston()
df_boston = pd.DataFrame(boston_data.data,
                         columns=boston_data.feature_names)
df_boston['target'] = pd.Series(boston_data.target)
df_boston.head()
```

As a generic function:

```python
def sklearn_to_df(sklearn_dataset):
    df = pd.DataFrame(sklearn_dataset.data,
                      columns=sklearn_dataset.feature_names)
    df['target'] = pd.Series(sklearn_dataset.target)
    return df

df_boston = sklearn_to_df(datasets.load_boston())
```
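Note that load_boston was removed in scikit-learn 1.2, so on current versions the generic function can be exercised with another Bunch-style loader instead, e.g. load_wine (a sketch; the dataset choice is mine, not the original answer's):

```python
import pandas as pd
from sklearn import datasets

def sklearn_to_df(sklearn_dataset):
    df = pd.DataFrame(sklearn_dataset.data,
                      columns=sklearn_dataset.feature_names)
    df['target'] = pd.Series(sklearn_dataset.target)
    return df

# wine: 178 samples, 13 features, plus the appended target column
df_wine = sklearn_to_df(datasets.load_wine())
print(df_wine.shape)  # (178, 14)
```

Recent scikit-learn (0.23+) can also do this natively: datasets.load_wine(as_frame=True).frame returns the combined features-plus-target DataFrame directly.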

Converting a DataFrame to an array:

```python
df = df.values
```

Converting an array to a DataFrame:

```python
import pandas as pd

df = pd.DataFrame(df)
```

When you need everything in a single flat row for statistical analysis, append flatten():

```python
df = df.values.flatten()
```
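The conversions above can be put together in one round trip (a minimal sketch with made-up data):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})

arr = df.values                               # DataFrame -> ndarray
df2 = pd.DataFrame(arr, columns=df.columns)   # ndarray -> DataFrame
flat = df.values.flatten()                    # one flat 1-D array, row-major order

print(arr.shape)   # (2, 2)
print(list(flat))  # [1, 3, 2, 4]
```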
