    MI(X, Y) = \sum_{x \in X} \sum_{y \in Y} prob(x, y) \log \frac{prob(x, y)}{prob(x)\, prob(y)}    (1)
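    As a concrete illustration of Eq. (1), the following is a minimal Python sketch that computes MI for two discrete variables from their joint probability table; the function name and the NumPy representation are illustrative, not part of the paper.

    import numpy as np

    def mutual_information(joint):
        """MI(X, Y) per Eq. (1), where joint[i, j] = prob(x_i, y_j)."""
        px = joint.sum(axis=1, keepdims=True)   # marginal prob(x)
        py = joint.sum(axis=0, keepdims=True)   # marginal prob(y)
        mask = joint > 0                        # skip zero-probability cells
        return float(np.sum(joint[mask] * np.log((joint / (px * py))[mask])))

    # Example: a dependent 2x2 joint distribution.
    joint = np.array([[0.4, 0.1],
                      [0.1, 0.4]])
    print(mutual_information(joint))  # ~0.193 nats; MI > 0 since X and Y are dependent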
    However, with big data in consideration for lung cancer analysis, maximum relevance and minimum redundancy are measured. Maximum relevance 'Rel' consists of searching for attributes with a higher relevancy factor and is formulated as follows:

    Rel = \frac{1}{|Attr|} \sum_{Att_i \in Attr} MI(Att_i, C)    (2)
    With reference to Eq. (2), the maximum relevance between the attributes 'Attr' and the class 'C' is obtained according to the mutual information factor 'MI'. However, selecting attributes based on the maximum relevance criterion alone results in a large amount of redundancy. To minimize it, the minimum redundancy 'Red' criterion is used and is formulated as follows:

    Red = \frac{1}{|Attr|^2} \sum_{Att_i, Att_j \in Attr} MI(Att_i, Att_j)    (3)
    From Eq. (3), the minimum redundant attributes 'Att' are obtained between the attributes 'Att_i' and 'Att_j', respectively. From Eqs. (2) and (3), the optimization of both the maximum relevancy 'Rel' and the minimum redundancy 'Red' results in the maximum relevance minimum redundancy criterion, called 'RelRed'. The maximum relevance minimum redundancy is calculated as follows:

    RelRed = \max_{Att_i \in Attr} [Rel - Red]    (4)
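    To make the criterion concrete, the sketch below evaluates 'RelRed' for a candidate attribute subset. It assumes discretized attributes and uses scikit-learn's mutual_info_score as the estimator of 'MI'; the function and variable names are assumptions for illustration, not the paper's notation.

    import numpy as np
    from sklearn.metrics import mutual_info_score

    def rel_red(X, y, subset):
        """Additive criterion RelRed = Rel - Red over Eqs. (2)-(4).
        X: discretized attribute matrix (samples x attributes); y: class labels;
        subset: indices of the candidate attribute set 'Attr'."""
        n = len(subset)
        # Eq. (2): mean MI between each selected attribute and the class C.
        rel = sum(mutual_info_score(X[:, i], y) for i in subset) / n
        # Eq. (3): mean pairwise MI among the selected attributes.
        red = sum(mutual_info_score(X[:, i], X[:, j])
                  for i in subset for j in subset) / (n * n)
        return rel - red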
    With the maximum relevance minimum redundancy attributes obtained for LCD diagnosis, and with the objective of minimizing the time consumed, in this work a Newton–Raphson Maximum Likelihood function is applied to the resultant attributes. The log-likelihood function for Eq. (4) is formulated as follows:
    In the log-likelihood function, the first derivative and the second derivative are formulated as follows:
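    The derivatives themselves are not reproduced in this excerpt, but the update they drive is the standard Newton–Raphson step θ ← θ − L'(θ)/L''(θ). A minimal sketch follows, with a Bernoulli log-likelihood standing in as an assumed example; none of these names come from the paper.

    def newton_raphson_mle(grad, hess, theta0, tol=1e-8, max_iter=50):
        """Maximize a log-likelihood given its first derivative (grad)
        and second derivative (hess) for a scalar parameter."""
        theta = theta0
        for _ in range(max_iter):
            step = grad(theta) / hess(theta)
            theta -= step                  # Newton step: theta - L'/L''
            if abs(step) < tol:
                break
        return theta

    # Example: MLE of a Bernoulli parameter from k successes in n trials;
    # log L(p) = k*log(p) + (n-k)*log(1-p), whose maximizer is k/n.
    k, n = 7, 10
    grad = lambda p: k / p - (n - k) / (1 - p)
    hess = lambda p: -k / p**2 - (n - k) / (1 - p)**2
    print(newton_raphson_mle(grad, hess, theta0=0.5))  # converges to 0.7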
    The MLMR preprocessing model is employed to find the most relevant and least redundant attributes in the set of classes. At first, the maximum relevancy between the set of attributes and the class is identified based on mutual information. The resulting attributes are often highly relevant but redundant. To solve this issue, the minimum redundancy between attributes is also measured in the MLMR preprocessing model. These two conditions are equally important, and they are combined into a single criterion function in MLMR. In the WONN-MLB method, an additive combination is used to integrate the maximum relevancy and minimum redundancy. Lastly, maximization is performed on the resultant attributes using the Newton–Raphson Maximum Likelihood function, thus minimizing the time required to diagnose lung cancer. Fig. 2 shows the flow diagram of the proposed MLMR preprocessing model.
    As shown in Fig. 2, let us assume a standard feature selection problem by means of instances 'e_s = (e_{1s}, e_{2s}, . . . , e_{ns}, e_{Cs})', where 'e_{is}' represents the 'i'th attribute value of the 's'th sample and 'e_{Cs}' represents the value of the output class 'C'. Moreover, let us assume a training dataset 'D' with 'm' examples consisting of a set 'Attr' of 'n' attributes. The main objective of the MLMR preprocessing model is to identify the maximum dependency between the set of attributes 'Attr' and the class 'C' using mutual information, denoted by 'MI'. The value of 'MI' is obtained using Eq. (1).

    The log-likelihood function is used to maximize the maximum relevance and minimum redundant attributes. From that, the most relevant attributes are taken for the classification process, which effectively reduces the time required for lung cancer disease diagnosis. The pseudo-code of the proposed Maximum Likelihood Minimum Redundant preprocessing is given in Algorithm 1.
    The Maximum Likelihood Minimum Redundant preprocessing is described in Algorithm 1, where, for each training dataset (i.e., big data), not all of the attributes are essential. In this work, for lung cancer diagnosis with big data as the input dataset, the maximum relevance and minimum redundant attributes are selected. Then, Newton–Raphson's Likelihood Estimation is evaluated with respect to the first and the second derivative, with the objective of minimizing the time consumed for lung cancer diagnosis.
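    Algorithm 1 itself is not reproduced in this excerpt. The sketch below shows one common greedy realization consistent with the description above: attributes are added one at a time by maximizing relevance minus redundancy. The greedy strategy and all names are assumptions for illustration, not the paper's exact pseudo-code.

    import numpy as np
    from sklearn.metrics import mutual_info_score

    def mlmr_select(X, y, n_select):
        """Greedily pick attributes that maximize Rel - Red (Eq. (4))."""
        remaining = list(range(X.shape[1]))
        selected = []
        while len(selected) < n_select:
            def score(i):
                rel = mutual_info_score(X[:, i], y)       # relevance to class C
                red = (np.mean([mutual_info_score(X[:, i], X[:, j])
                                for j in selected])
                       if selected else 0.0)              # redundancy to chosen set
                return rel - red                          # additive combination
            best = max(remaining, key=score)
            selected.append(best)
            remaining.remove(best)
        return selected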
    Upon successful completion of all of the boosting iterations, the final ensemble learning classifier, which possesses a weighted error that is better than chance, is evaluated by combining all weak classifiers with an optimal weight (Mana et al. [27]). This is formulated as follows:
    From Eqs. (10) and (11), the low weighted error 'ε_i' is obtained based on the probability distribution function 'Prob_{D_i}' for a linear combination of weighted inputs (i.e., attributes) 'f(w_s)'. Finally, a new component 'k_i' based on the error function is calculated as follows:
    Then, a weak classifier with low weighted error is selected and is formulated as follows:
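    Since the exact error function behind 'k_i' is not shown in this excerpt, the sketch below uses the standard AdaBoost-style weight as an assumed stand-in to illustrate one boosting round: the weak classifier with the lowest weighted error is selected, its vote weight is computed, and the sample distribution is updated.

    import numpy as np

    def boosting_round(predictions, y, weights):
        """One boosting round in the spirit of Eqs. (10)-(13).
        predictions: (n_classifiers x n_samples) array of {-1, +1} outputs;
        y: true labels in {-1, +1}; weights: sample distribution (sums to 1)."""
        # Weighted error of each weak classifier under the distribution ProbD.
        errors = np.array([(weights * (p != y)).sum() for p in predictions])
        best = int(errors.argmin())            # weak classifier with low weighted error
        eps = errors[best]
        k = 0.5 * np.log((1 - eps) / eps)      # assumed AdaBoost-style component k_i
        # Re-weight samples: misclassified points gain weight for the next round.
        w = weights * np.exp(-k * y * predictions[best])
        return best, k, w / w.sum()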
    3.1.3. Weighted optimized neural network with maximum likelihood boosting