Information Retrieval Evaluation - Ranking Quality Evaluation (NDCG)
Version information:
System version: Windows 10 64-bit
Python version: Python 3.6.0 :: Anaconda 4.3.1 (64-bit)
Body:
NDCG (Normalized Discounted Cumulative Gain)
When the data is labeled with multiple relevance grades, NDCG can be used for evaluation.
The closer the computed NDCG value is to 1, the better the ranking quality.
Below are the NDCG formula and a simple example:
i is the position in the new ranking
rel_i is the true relevance grade
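The formula appeared as an image in the original post; reconstructed from the code below, the two variants it computes are (method 0 uses linear gains, method 1 exponential gains):

```latex
\mathrm{DCG@k} = \sum_{i=1}^{k} \frac{rel_i}{\log_2(i+1)}
\quad\text{(method 0)}
\qquad
\mathrm{DCG@k} = \sum_{i=1}^{k} \frac{2^{rel_i}-1}{\log_2(i+1)}
\quad\text{(method 1)}

\mathrm{NDCG@k} = \frac{\mathrm{DCG@k}}{\mathrm{IDCG@k}}
```

where IDCG@k is the DCG@k of the ideal ordering, i.e. the grades sorted in descending order.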
Python code:
"""
程式參考自:
https://gist.github.com/bwhite/3726239
https://gist.github.com/gumption/b54278ec9bab2c0e0472816d1d7663be
差異:新增「 sum (2^rel_i - 1) / log2(i + 1) 」的版本
作者:Jie Dondon
版本:ndcg_dondon_20180201_v2
"""
import numpy as np

def dcg_at_k(r, k, method=0):
    """Score is discounted cumulative gain (dcg)

    Relevance is positive real values. Can use binary as the previous methods.
    There is a typographical error in the formula referenced in the original
    definition of this function:
    http://www.stanford.edu/class/cs276/handouts/EvaluationNew-handout-6-per.pdf
    log2(i) should be log2(i + 1)
    The formulas here are derived from
    https://en.wikipedia.org/wiki/Discounted_cumulative_gain#Discounted_Cumulative_Gain
    The formulas return the same results when r contains only binary values.

    >>> r = [2, 3, 2, 3, 1, 1]
    >>> dcg_at_k(r, 6, 1)
    12.674304175856518
    >>> r = [3, 3, 2, 2, 1, 1]
    >>> dcg_at_k(r, 6, 1)
    14.951597943562946

    Args:
        r: Relevance scores (list or numpy array) in rank order
            (first element is the most relevant item)
        k: Number of results to consider
        method: If 0 then sum rel_i / log2(i + 1) [not log2(i)]
                If 1 then sum (2^rel_i - 1) / log2(i + 1)

    Returns:
        Discounted cumulative gain
    """
    r = np.asfarray(r)[:k]
    if r.size:
        if method == 0:
            return np.sum(r / np.log2(np.arange(2, r.size + 2)))
        elif method == 1:
            return np.sum((np.power(2, r) - 1) / np.log2(np.arange(2, r.size + 2)))
        else:
            raise ValueError('method must be 0 or 1')
    return 0.
def ndcg_at_k(r, k, method=0):
    """Score is normalized discounted cumulative gain (ndcg)

    Relevance is positive real values. Can use binary as the previous methods.
    Example from
    2013 - Introduction to Information Retrieval Evaluation, p. 3
    (http://www.stanford.edu/class/cs276/handouts/EvaluationNew-handout-6-per.pdf)
    ^ The formula in that document contains an error, so the results it provides are also wrong.
    2012 - Microsoft Research Asia - 武威 - Fundamentals of Machine Learning and Learning to Rank, p. 36

    >>> r = [2, 1, 2, 0]
    >>> ndcg_at_k(r, 4, 0)
    0.96519546960144276
    >>> r = [2, 3, 2, 3, 1, 1]
    >>> ndcg_at_k(r, 6, 1)
    0.84768893757694552

    Args:
        r: Relevance scores (list or numpy array) in rank order
            (first element is the most relevant item)
        k: Number of results to consider
        method: If 0 then sum rel_i / log2(i + 1) [not log2(i)]
                If 1 then sum (2^rel_i - 1) / log2(i + 1)

    Returns:
        Normalized discounted cumulative gain
    """
    dcg_max = dcg_at_k(sorted(r, reverse=True), k, method)
    if not dcg_max:
        return 0.
    return dcg_at_k(r, k, method) / dcg_max
Usage:
r is a list whose order is the new ranking produced by the method being evaluated; each number is the item's true relevance grade, and a higher score means a higher grade.
k is the number of results to consider.
method selects the formula: NDCG has more than one formulation; this modified program adds one more to the original author's code, numbered 1, whose formula is the one shown at the beginning of this article.
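To see how the two gain choices behave on the same ranking, here is a condensed, self-contained re-implementation of the same logic (a sketch, not the author's original code). With exponential gains (method 1), misplacing a high-grade item is penalized more heavily, so the score for an imperfect ranking is lower:

```python
import numpy as np

def dcg(r, method=0):
    """DCG with linear gains (method 0) or exponential gains (method 1)."""
    r = np.asarray(r, dtype=float)
    gains = r if method == 0 else np.power(2, r) - 1
    return np.sum(gains / np.log2(np.arange(2, r.size + 2)))

def ndcg(r, method=0):
    """NDCG: DCG of the given ranking divided by DCG of the ideal ordering."""
    ideal = dcg(sorted(r, reverse=True), method)
    return dcg(r, method) / ideal if ideal else 0.0

r = [2, 3, 2, 3, 1, 1]   # the two grade-3 items sit at positions 2 and 4, not 1 and 2
print(ndcg(r, 0))        # linear gains
print(ndcg(r, 1))        # exponential gains: lower, the misplacement costs more
```

A perfectly ordered list scores exactly 1 under either method, which is why values closer to 1 indicate a better ranking.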
>>> r = [2, 1, 2, 0]
>>> ndcg_at_k(r, 4, 0)
0.96519546960144276
>>> r = [2, 3, 2, 3, 1, 1]
>>> ndcg_at_k(r, 6, 1)
0.84768893757694552
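The first call above can also be checked by hand. This standalone sketch recomputes DCG, ideal DCG, and their ratio for r = [2, 1, 2, 0] with method 0, without relying on the functions defined earlier:

```python
import numpy as np

r = np.asarray([2, 1, 2, 0], dtype=float)       # true grades in ranked order
discounts = np.log2(np.arange(2, r.size + 2))   # log2(2), log2(3), log2(4), log2(5)

dcg = np.sum(r / discounts)                     # method 0: gains are the grades themselves
idcg = np.sum(np.sort(r)[::-1] / discounts)     # ideal ordering: [2, 2, 1, 0]

print(dcg, idcg, dcg / idcg)                    # the ratio matches ndcg_at_k(r, 4, 0)
```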