
Feature Selection Algorithms: A Python Implementation of the Relief Algorithm

Date: 2023-02-08 18:41:17


Feature Selection Algorithms: The Relief Algorithm

Feature selection algorithms fall into three categories:

(1) Wrapper methods, e.g. genetic algorithms: suited to large-scale data, with strong global search ability and little tendency to get stuck in local optima.

(2) Embedded methods: built into a machine learning algorithm; they often reduce dimensionality well and suit high-dimensional feature sets with large amounts of data.

(3) Filter methods: independent of any machine learning algorithm; typically fast and efficient.

How the Relief algorithm works

Principle: Relief assigns each feature a weight according to how strongly that feature correlates with the class label, then uses the weights to select the subset of features with the greatest influence on classification.

Concretely: pick a random sample from the data set, call it sample; among samples of the same class, find the nearest one, nearHit; among samples of a different class, find the nearest one, nearMiss (Euclidean distance is the usual default, though Manhattan or other distances also work). If, on a given feature, sample is closer to nearHit than to nearMiss, that feature helps separate the classes and its weight W is increased; if the opposite holds, the feature hurts separation and its weight is decreased:
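The hit/miss search just described can be sketched in a few lines of NumPy (a minimal illustration, separate from the full implementation later in the article; the arrays X and y and the helper name nearest_hit_miss are made up for this example):

```python
import numpy as np

def nearest_hit_miss(X, y, i):
    """Return indices of the nearest same-class sample (nearHit) and the
    nearest different-class sample (nearMiss) for sample i, by Euclidean distance."""
    dist = np.linalg.norm(X - X[i], axis=1)  # distance from sample i to every sample
    dist[i] = np.inf                         # exclude the sample itself
    same = np.where(y == y[i])[0]
    diff = np.where(y != y[i])[0]
    near_hit = same[np.argmin(dist[same])]
    near_miss = diff[np.argmin(dist[diff])]
    return int(near_hit), int(near_miss)

X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(nearest_hit_miss(X, y, 0))  # (1, 2)
```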

W_i = W_{i-1} - (sample_i - nearHit_i)^2 + (sample_i - nearMiss_i)^2
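As a quick numeric check of this update (made-up values, a single feature already scaled to [0, 1], starting from a weight of 0):

```python
# One Relief update for a single feature, starting from w = 0.
# The feature value of the sample, its nearHit and its nearMiss are invented.
sample, near_hit, near_miss = 0.3, 0.35, 0.8

w = 0.0
w = w - (sample - near_hit) ** 2 + (sample - near_miss) ** 2
print(round(w, 4))  # 0.2475
```

The sample sits close to its nearHit (-0.0025) but far from its nearMiss (+0.25), so the feature's weight grows: it separates the classes well.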

Drawbacks: (1) When the sample size is small, the statistics-based correlation computation is easily disturbed by noise, so the estimated feature weights carry error.

(2) It does not account for interactions between features, so the selected feature set can contain redundancy.

Code implementation (Python 3.7)

import pandas as pd
import numpy as np
import numpy.linalg as la
import random


# Exception class
class FilterError(Exception):
    pass


class Filter:
    def __init__(self, data_df, sample_rate, t, k):
        """
        :param data_df: data frame (columns are features, rows are samples)
        :param sample_rate: sampling ratio
        :param t: threshold on the weight statistic
        :param k: number of features to select
        """
        self.__data = data_df
        self.__feature = data_df.columns
        self.__sample_num = int(round(len(data_df) * sample_rate))
        self.__t = t
        self.__k = k

    # Preprocessing: map discrete (string) features to numeric codes
    def get_data(self):
        new_data = pd.DataFrame()
        for one in self.__feature[:-1]:
            col = self.__data[one]
            if (str(list(col)[0]).split(".")[0]).isdigit() or str(list(col)[0]).isdigit() \
                    or (str(list(col)[0]).split('-')[-1]).split(".")[-1].isdigit():
                # numeric feature: keep as-is
                new_data[one] = self.__data[one]
            else:
                # discrete feature: map each category to an integer code
                keys = list(set(list(col)))
                values = list(range(len(keys)))
                new = dict(zip(keys, values))
                new_data[one] = self.__data[one].map(new)
        new_data[self.__feature[-1]] = self.__data[self.__feature[-1]]
        return new_data

    # Return a sample's nearest hit and nearest miss
    # (the closest same-class sample and the closest different-class sample)
    def get_neighbors(self, row):
        df = self.get_data()  # preprocessed data set
        row_type = row[df.columns[-1]]  # class label of the sample
        # same-class and different-class samples, without the label column
        right_df = df[df[df.columns[-1]] == row_type].drop(columns=[df.columns[-1]])
        wrong_df = df[df[df.columns[-1]] != row_type].drop(columns=[df.columns[-1]])
        print('Same-class sample vectors:', right_df)
        print('Different-class sample vectors:', wrong_df)
        aim = row.drop(df.columns[-1])  # drop the label column from the sample itself
        f = lambda x: eulidSim(np.mat(x), np.mat(aim))
        right_sim = right_df.apply(f, axis=1)
        # drop the closest same-class row, which is the sample itself (distance 0)
        right_sim_two = right_sim.drop(right_sim.idxmin())
        wrong_sim = wrong_df.apply(f, axis=1)
        return right_sim_two.idxmin(), wrong_sim.idxmin()

    # Compute the weight contribution of one feature
    def get_weight(self, feature, index, NearHit, NearMiss):
        data = self.__data.drop(self.__feature[-1], axis=1)
        row = data.iloc[index]
        nearhit = data.iloc[NearHit]
        nearmiss = data.iloc[NearMiss]
        if (str(row[feature]).split(".")[0]).isdigit() or str(row[feature]).isdigit() \
                or (str(row[feature]).split('-')[-1]).split(".")[-1].isdigit():
            # numeric feature: squared differences, normalized by the feature's range
            max_feature = data[feature].max()
            min_feature = data[feature].min()
            right = pow(round(abs(row[feature] - nearhit[feature]) / (max_feature - min_feature), 2), 2)
            wrong = pow(round(abs(row[feature] - nearmiss[feature]) / (max_feature - min_feature), 2), 2)
        else:
            # discrete feature: 0 if values match, 1 otherwise
            right = 0 if row[feature] == nearhit[feature] else 1
            wrong = 0 if row[feature] == nearmiss[feature] else 1
        w = wrong - right
        return w

    # Filter-style feature selection
    def relief(self):
        sample = self.get_data()
        m, n = np.shape(self.__data)  # m rows, n columns
        score = []
        sample_index = random.sample(range(0, m), self.__sample_num)
        print('Sampled row indices: %s' % sample_index)
        num = 1
        for i in sample_index:  # one pass per sampled row
            one_score = dict()
            row = sample.iloc[i]  # select the sample by index
            NearHit, NearMiss = self.get_neighbors(row)
            print('Pass %s: sample index %s, NearHit row %s, NearMiss row %s' % (num, i, NearHit, NearMiss))
            for f in self.__feature[0:-1]:
                w = self.get_weight(f, i, NearHit, NearMiss)
                one_score[f] = w
                print('Weight of feature %s: %s.' % (f, w))
            score.append(one_score)
            num += 1
        f_w = pd.DataFrame(score)
        print('Per-sample feature weights:')
        print(f_w)
        print('Mean feature weights:')
        print(f_w.mean())
        return f_w.mean()

    # Return the finally selected features
    def get_final(self):
        f_w = pd.DataFrame(self.relief(), columns=['weight'])
        final_feature_t = f_w[f_w['weight'] > self.__t]
        print(final_feature_t)
        # sort descending so head(k) keeps the k LARGEST weights
        # (the original ascending sort picked the k smallest)
        final_feature_k = f_w.sort_values('weight', ascending=False).head(self.__k)
        print(final_feature_k)
        return final_feature_t, final_feature_k


# Several distance / similarity measures
def eulidSim(vecA, vecB):
    return la.norm(vecA - vecB)


def cosSim(vecA, vecB):
    """
    :param vecA: row vector
    :param vecB: row vector
    :return: cosine similarity rescaled to the 0-1 range
    """
    num = float(vecA * vecB.T)
    denom = la.norm(vecA) * la.norm(vecB)
    cosSim = 0.5 + 0.5 * (num / denom)
    return cosSim


def pearsSim(vecA, vecB):
    if len(vecA) < 3:
        return 1.0
    else:
        return 0.5 + 0.5 * np.corrcoef(vecA, vecB, rowvar=0)[0][1]


if __name__ == '__main__':
    data = pd.read_csv('watermelon3_0_Ch.csv', encoding='ANSI')[['色泽', '根蒂', '敲声', '纹理', '脐部', '触感', '密度', '含糖率', '好瓜']]
    print(data)
    f = Filter(data, 1, 0.8, 6)
    f.relief()
    # f.get_final()
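To try the method end to end without the watermelon CSV, here is a self-contained Relief pass over a small synthetic DataFrame (toy data, independent of the Filter class above; the feature names f1/f2 and all values are invented):

```python
import numpy as np
import pandas as pd

# Toy data: f1 tracks the label, f2 is noise (values are made up).
df = pd.DataFrame({
    'f1': [0.1, 0.2, 0.15, 0.9, 0.85, 0.95],
    'f2': [0.5, 0.1, 0.9, 0.4, 0.6, 0.2],
    'label': [0, 0, 0, 1, 1, 1],
})

X = df[['f1', 'f2']].to_numpy()
y = df['label'].to_numpy()
X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))  # scale features to [0, 1]

w = np.zeros(X.shape[1])
for i in range(len(X)):            # here: every sample (no subsampling)
    d = np.linalg.norm(X - X[i], axis=1)
    d[i] = np.inf                  # exclude the sample itself
    hit = np.where(y == y[i])[0]
    miss = np.where(y != y[i])[0]
    nh = hit[np.argmin(d[hit])]    # nearHit
    nm = miss[np.argmin(d[miss])]  # nearMiss
    # per-feature Relief update: penalize hit distance, reward miss distance
    w += -(X[i] - X[nh]) ** 2 + (X[i] - X[nm]) ** 2

print({name: float(round(v, 3)) for name, v in zip(['f1', 'f2'], w / len(X))})
```

The discriminative feature f1 ends up with a clearly larger weight than the noise feature f2, which is exactly what the threshold t and top-k selection in get_final exploit.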

The dataset used is watermelon3_0_Ch.csv (the watermelon 3.0 dataset; the table image is not reproduced here).

Result screenshot: (not reproduced here)
