IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL. 26, NO. 2, APRIL 2022 319
Evolutionary Many-Task Optimization Based on
Multisource Knowledge Transfer
Zhengping Liang, Xiuju Xu, Ling Liu, Yaofeng Tu, and Zexuan Zhu, Senior Member, IEEE
Abstract—Multitask optimization aims to solve two or more
optimization tasks simultaneously by leveraging intertask knowl-
edge transfer. However, as the number of tasks increases to
the extent of many-task optimization, the knowledge trans-
fer between tasks encounters more uncertainty and challenges,
thereby resulting in degradation of optimization performance.
To fully exploit the many-task optimization framework and
minimize the potential negative transfer, this article proposes
an evolutionary many-task optimization algorithm based on a
multisource knowledge transfer mechanism, namely, EMaTO-
MKT. Particularly, in each iteration, EMaTO-MKT determines
the probability of using knowledge transfer adaptively according
to the evolution experience, and balances the self-evolution within
each task and the knowledge transfer among tasks. To perform
knowledge transfer, EMaTO-MKT selects multiple highly simi-
lar tasks in terms of maximum mean discrepancy as the learning
sources for each task. Afterward, a knowledge transfer strategy
based on local distribution estimation is applied to enable the
learning from multiple sources. Compared with other
state-of-the-art evolutionary many-task algorithms on benchmark test
suites, EMaTO-MKT shows competitiveness in solving many-task
optimization problems.
Index Terms—Evolutionary many-task optimization (EMaTO),
local distribution estimation, maximum mean discrepancy
(MMD), multisource knowledge transfer.
Manuscript received October 31, 2020; revised February 21, 2021 and
June 2, 2021; accepted July 25, 2021. Date of publication August 2, 2021;
date of current version March 31, 2022. This work was supported
in part by the National Natural Science Foundation of China under
Grant 61871272 and Grant 62001300; in part by the National Natural
Science Foundation of Guangdong, China, under Grant 2020A1515010479,
Grant 2021A1515011911, and Grant 2021A1515011679; in part by the
Guangdong Provincial Key Laboratory under Grant 2020B121201001;
in part by the Shenzhen Fundamental Research Program under
Grant 20200811181752003 and Grant JCYJ20190808173617147; and in part
by the BGI-Research Shenzhen Open Funds under Grant BGIRSZ20200002.
This article was recommended by M. Zhang. (Corresponding authors:
Ling Liu; Zexuan Zhu.)
Zhengping Liang, Xiuju Xu, and Ling Liu are with the College of Computer
Science and Software Engineering, Shenzhen University, Shenzhen 518060,
China (e-mail: liangzp@szu.edu.cn; xuxiuju2018@e-mail.szu.edu.cn;
liulingcs@szu.edu.cn).
Yaofeng Tu is with Central Research and Development Institute, ZTE
Corporation, Shenzhen 518057, China (e-mail: tu.yaofeng@zte.com.cn).
Zexuan Zhu is with the College of Computer Science and Software
Engineering, Shenzhen University, Shenzhen 518060, China, also with
Shenzhen Pengcheng Laboratory, Shenzhen 518055, China, and also
with the Guangdong Provincial Key Laboratory of Brain-Inspired
Intelligent Computation, Southern University of Science and Technology,
Shenzhen 518055, China (e-mail: zhuzx@szu.edu.cn).
This article has supplementary material provided by the
authors and color versions of one or more figures available at
https://doi.org/10.1109/TEVC.2021.3101697.
Digital Object Identifier 10.1109/TEVC.2021.3101697
I. INTRODUCTION
Evolutionary algorithms (EAs) are population-based
optimization algorithms capable of obtaining multiple
solutions of a target problem in a single run [1]–[3]. They
have achieved wide success in various complex application
problems [4]–[7]. Traditional EAs tend to solve a single
problem from scratch, assuming zero prior knowledge.
However, since complex real-world optimization problems sel-
dom appear in isolation, knowledge learned from previous
optimization exercises or related problems can be exploited
to facilitate the solution of the target problems. Inspired by
the parallel processing of multiple problems in the human brain,
Gupta et al. [8], [9] proposed a paradigm, namely, evolutionary
multitask optimization (EMTO), to solve multiple optimization
problems simultaneously. Compared with the traditional evo-
lutionary single-task optimization, EMTO can achieve better
performance in solving correlated optimization problems by
leveraging knowledge transfer among the problems [10]–[14].
Nevertheless, as the number of optimization tasks increases to
the extent of many-task optimization (MaTO) [15] (the num-
ber of tasks exceeds three), the majority of EMTO algorithms
face significant challenges in computational resource allocation,
larger scale knowledge transfer, and task selection for knowledge
transfer [16]–[18]. More specifically, in MaTO, more effort
should be put into balancing the computational budgets allocated
to intratask optimization and intertask knowledge transfer.
New knowledge transfer mechanisms are required to
enable efficient knowledge transfer among a larger num-
ber of tasks, where proper selection of participant tasks is the
key to the efficiency of knowledge transfer.
A few specific evolutionary MaTO (EMaTO) algorithms
have been proposed to solve the aforementioned issues.
For example, GMFEA [16] uses a clustering method to
choose tasks for knowledge transfer. The explicit EMT algo-
rithm (EEMTA) [19] performs task selection for knowledge
transfer via a feedback-based credit allocation method. SaEF-
AKT [20] adopts the Kullback–Leibler divergence (KLD)
and pheromone-based method to identify tasks for knowledge
transfer. Many-task EA (MaTEA) [17] transfers knowledge
across the tasks selected according to the feedback information
of the evolutionary process and KLD. To allocate computa-
tional resources, MaTEA also introduces a fixed probability
to control the intratask optimization and knowledge transfer
among tasks. EBS [15] scales up the knowledge transfer by
concatenating offspring to share the knowledge of all tasks.
The existing EMaTO algorithms have made substantial
progress in solving a portion of the challenges of MaTO, yet
there remains much room for a more comprehensive solution
that can take all the challenges into consideration.
To handle MaTO problems more efficiently, this article
proposes an EMaTO algorithm with multisource knowledge
transfer, namely, EMaTO-MKT. Particularly, the multisource
knowledge transfer mechanism consists of an adaptive mat-
ing probability (AMP) strategy, a maximum mean discrepancy
(MMD) [21]-based task selection (MTS) strategy, and a
local distribution estimation-based knowledge transfer (LEKT)
strategy. The AMP strategy estimates the current evolution
trend of each task by learning from its evolution experience,
and calculates the probability of generating offspring for each
task, which helps balance the self-evolution within each task
and the knowledge trans-
fer among tasks. The MTS strategy uses MMD to calculate
the difference between the decision variable distributions of
different task populations, and selects appropriate tasks to
participate in knowledge transfer, thereby relieving negative
transfer [22]. The LEKT strategy supports knowledge transfer
across any number of tasks. The union of the populations of the
tasks participating in knowledge transfer is first divided into
subpopulations by a clustering method, and then a probability
model is constructed for each subpopulation through distribution
estimation [23], based on which offspring individuals are
generated to accelerate convergence and maintain the diversity
of the population. To verify the effectiveness of EMaTO-MKT,
it is compared with other state-
EMaTO algorithms on two sets of single-objective MaTO
problems and two sets of multiobjective MaTO problems.
EMaTO-MKT shows good competitiveness in the compari-
son studies. The contributions of this article are highlighted as
follows.
1) The AMP strategy introduces an adaptive knowledge
transfer frequency based on evolutionary experience,
which leads to better computational resource allocation
and population convergence.
2) The MTS strategy based on MMD serves as a good
solution for task selection in knowledge transfer, and the
experimental results demonstrate that it can substantially
reduce negative transfer (a brief sketch follows this list).
3) To the best of our knowledge, the LEKT strategy rep-
resents the first attempt to enable knowledge transfer
among an arbitrary number of tasks. The strategy can take
full advantage of involving more tasks in MaTO (a rough
sketch is given at the end of this section).
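To make the MMD-based task selection concrete, the following is a minimal Python sketch of how the similarity between two task populations could be measured with an empirical RBF-kernel MMD estimate and used to pick the most similar source tasks. The kernel, its bandwidth gamma, and the number of selected sources are illustrative assumptions only; the exact settings used by EMaTO-MKT are given in Section III.

import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise RBF kernel matrix k(a, b) = exp(-gamma * ||a - b||^2).
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * np.maximum(sq, 0.0))

def mmd_squared(X, Y, gamma=1.0):
    # Biased empirical estimate of the squared MMD between samples X and Y.
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())

def select_source_tasks(populations, target, num_sources=3, gamma=1.0):
    # Rank the other task populations by MMD to the target population
    # (decision variables in the unified space) and keep the most similar.
    scores = {k: mmd_squared(populations[target], pop, gamma)
              for k, pop in populations.items() if k != target}
    return sorted(scores, key=scores.get)[:num_sources]

# Toy usage: three task populations of 50 individuals in a 10-D unified space.
rng = np.random.default_rng(0)
pops = {k: rng.random((50, 10)) for k in range(3)}
print(select_source_tasks(pops, target=0, num_sources=2))

A smaller MMD value indicates a smaller gap between two population distributions, so the tasks with the smallest MMD to the target population are taken as its learning sources.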
The remainder of this article is structured as follows.
Section II presents the preliminaries and literature review
on related work. Section III details the proposed algorithm.
Section IV describes the experimental study. Finally, Section V
concludes this article and discusses the potential future work.
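To round off this section, the LEKT idea outlined above can be sketched as follows: the populations of the selected source tasks are merged with the target population, the union is clustered, a local probability model is estimated for each cluster, and offspring are sampled from those models. K-means clustering and diagonal Gaussian models are assumptions made purely for illustration; the actual clustering method and distribution-estimation model of EMaTO-MKT are specified in Section III.

import numpy as np
from sklearn.cluster import KMeans

def lekt_offspring(union_pop, n_clusters=3, n_offspring=50, seed=None):
    # union_pop: (N, D) array, the union of the target population and the
    # populations of the selected source tasks in the unified search space.
    rng = np.random.default_rng(seed)
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(union_pop)
    offspring = []
    for c in range(n_clusters):
        cluster = union_pop[labels == c]
        if len(cluster) < 2:
            continue
        # Local model per cluster: per-dimension mean and standard deviation.
        mu, sigma = cluster.mean(axis=0), cluster.std(axis=0) + 1e-12
        # Each cluster contributes offspring in proportion to its size.
        n_c = max(1, round(n_offspring * len(cluster) / len(union_pop)))
        offspring.append(rng.normal(mu, sigma, size=(n_c, union_pop.shape[1])))
    # Unified search space assumed to be [0, 1]^D, so samples are clipped.
    return np.clip(np.vstack(offspring), 0.0, 1.0)

# Toy usage: merge a 10-D target population with two source populations.
rng = np.random.default_rng(1)
union = np.vstack([rng.random((40, 10)) for _ in range(3)])
print(lekt_offspring(union, n_clusters=4, n_offspring=40, seed=2).shape)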
II. PRELIMINARIES AND LITERATURE REVIEW
To facilitate the understanding of the proposed EMaTO-
MKT, the preliminaries of multitask optimization (MTO) and
the literature review on related work are provided in this
section.
A. Multitask Optimization
MTO [24]–[26] refers to the simultaneous optimization of
multiple self-contained tasks or problems by exploiting syn-
ergies existing among them. It is closely related to transfer
learning [22] and multitask learning [27]. In machine learn-
ing, the optimization of the learning model usually calls for
a large volume of data. If the knowledge of other models in
the related learning tasks can be reused, we can save effort in
data collection and the learning process [28]. Accordingly, knowl-
edge transfer is introduced in transfer learning and multitask
learning. Transfer learning uses the knowledge obtained from
one or more source tasks to improve the learning performance
of the target task, whereas multitask learning establishes
knowledge transfer among different tasks of equal priority
aiming at improving the learning performance of all tasks
at the same time. Extending the knowledge transfer princi-
ple of transfer learning and multitask learning to the field of
optimization leads to MTO, which focuses more on exploit-
ing shared knowledge to improve problem solving rather than
learning [29].
Without loss of generality, a conventional optimization
problem can be defined as follows:
\min F(x) = \min\big(f_1(x), f_2(x), \ldots, f_M(x)\big)
\quad \text{subject to: } x \in \mathbb{R}^D \tag{1}
where x = (x_1, x_2, ..., x_D) denotes a D-dimensional decision
variable in R^D, f_i(x) indicates the ith objective function, and M
is the number of objective functions. The problem is called a
single-objective optimization (SOO) problem if M = 1. If M >
1, the problem is referred to as a multiobjective optimization
(MOO) problem. In an MOO problem, a solution x^1 is said
to dominate x^2 if ∀i ∈ {1, 2, ..., M}, f_i(x^1) ≤ f_i(x^2) and
∃j ∈ {1, 2, ..., M}, f_j(x^1) < f_j(x^2). A solution not dominated by any
other solution is called a Pareto optimal solution. All Pareto
optimal solutions form the Pareto optimal set (PS), of which
the mapping in the objective space is called the Pareto front (PF).
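As a small, hedged illustration of the dominance relation just defined (minimization, matching (1)), the following Python snippet checks whether one objective vector dominates another; the function name is arbitrary.

import numpy as np

def dominates(f1, f2):
    # f1 dominates f2 (minimization) iff f1 is no worse in every objective
    # and strictly better in at least one.
    f1, f2 = np.asarray(f1, dtype=float), np.asarray(f2, dtype=float)
    return bool(np.all(f1 <= f2) and np.any(f1 < f2))

print(dominates([1.0, 2.0], [1.5, 2.0]))  # True: better in f_1, no worse in f_2
print(dominates([1.0, 3.0], [1.5, 2.0]))  # False: the two vectors are incomparable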
Based on the definition of conventional optimization, an
MTO problem can be formulated as
\{X_1, X_2, \ldots, X_K\}
= \{\operatorname{argmin} F_1(X_1), \operatorname{argmin} F_2(X_2), \ldots, \operatorname{argmin} F_K(X_K)\} \tag{2}
where F_i denotes the ith optimization task defined in (1), X_i
indicates the solution set of task i (for SOO tasks, the solution
set might contain only one solution), and K is the number of
tasks. An MTO problem is also called an MaTO problem if
K > 3. To solve MTO/MaTO problems with EAs, some new
properties should be defined as follows.
1) Factorial Rank: The factorial rank of a solution individ-
ual p on task i is the rank of p among all solutions in terms
of F_i (in MOO problems, the sorting can be achieved
via nondominated sorting as in NSGA-II [30]).
2) Skill Factor: The skill factor of a solution individual p
is the task on which p obtains the best factorial rank.
3) Scalar Fitness: Given the factorial rank ϕ of a solution
individual p on its skill factor task, the scalar fitness of
p is defined as 1/ϕ.
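The three properties above can be illustrated with a short sketch, assuming single-objective tasks and a population whose individuals are evaluated on every task; the function and variable names are illustrative only.

import numpy as np

def mfo_properties(F):
    # F[i, j] is the objective value of individual i on task j (minimization).
    n, K = F.shape
    # Factorial rank: position of each individual when sorted per task (1 = best).
    order = np.argsort(F, axis=0)
    ranks = np.empty_like(order)
    for j in range(K):
        ranks[order[:, j], j] = np.arange(1, n + 1)
    # Skill factor: the task on which the individual achieves its best rank.
    skill_factor = np.argmin(ranks, axis=1)
    # Scalar fitness: reciprocal of the factorial rank on the skill-factor task.
    scalar_fitness = 1.0 / ranks[np.arange(n), skill_factor]
    return ranks, skill_factor, scalar_fitness

# Toy usage: 5 individuals evaluated on 3 tasks.
rng = np.random.default_rng(0)
ranks, sf, fit = mfo_properties(rng.random((5, 3)))
print(ranks); print(sf); print(fit)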