Computer Standards & Interfaces 97 (2026) 104115

Contents lists available at ScienceDirect

Computer Standards & Interfaces

journal homepage: www.elsevier.com/locate/csi

Robust zero-watermarking method for multi-medical images based on Chebyshev–Fourier moments and Contourlet-FFT

Xinhui Lu a, Guangyun Yang a, Yu Lu a, Xiangguang Xiong a,b,∗

a School of Big Data and Computer Science, Guizhou Normal University, Guiyang 550025, China
b Guizhou Provincial Specialized Key Laboratory of Information Security Technology in Higher Education Institutions, Guiyang 550025, China
ARTICLE INFO

Keywords:
Zero-watermarking
Lorenz chaotic system
Chebyshev–Fourier moments
Contourlet transform
Fast Fourier transform

ABSTRACT

Classical robust watermarking methods embed secret data into a cover image to protect its copyright. However, they suffer from the problem of balancing imperceptibility and robustness. To address this issue, the impact of conventional attacks on the stability of feature vectors extracted from the cover image is examined. Accordingly, we propose a zero-watermarking method with high attack resistance for multi-medical images by employing the Contourlet transform (CT), Chebyshev–Fourier moments (CHFMs), and the fast Fourier transform (FFT). First, each medical image is normalized separately, and the normalized images are fused using a dual-tree complex wavelet transform-based method. Second, the effective region is extracted and subjected to the CT. The CHFMs of the low-frequency sub-bands are calculated, and the FFT is performed on the generated amplitude sequence to construct a feature matrix. A feature image is generated by comparing the magnitude of each feature value with the overall mean. Finally, the copyrighted image is encrypted using the Lorenz chaotic system and Fibonacci Q-matrix, after which an exclusive-OR operation is applied between the generated feature image and the encrypted copyrighted image to produce a zero-watermarking signal. The results show that the proposed method exhibits excellent resistance to attack, with a normalized correlation coefficient of up to 0.994 between the extracted image and the original copyrighted one. Furthermore, the average anti-attack performance of the proposed method is approximately 2% higher than that of similar existing methods, indicating that it is highly resistant to conventional, geometric, and combinatorial attacks.
1. Introduction

Steganography is a widely used technique for covertly embedding secret data within multimedia covers, aiming to ensure undetectability and robustness. By effectively concealing data presence, it enhances security and privacy, with broad applications across various fields [1–4]. Unlike steganography, whose primary purpose is to conceal the existence of data, robust digital watermarking techniques [5–8] aim to confirm copyright ownership by embedding specific secret data in the protected object. However, because of the strategy used to embed the secret data into the cover, increasing the strength of the embedding degrades the quality of the cover, thus damaging cover integrity. To address the limitations of traditional watermarking methods, Wen et al. [9] introduced zero-watermarking. Unlike conventional approaches, this technique preserves the original image by generating authentication data from stable image features rather than altering pixel values, ensuring both integrity and copyright protection. As a result, the zero-watermarking technique can effectively resolve the conflict between degrading the original cover image's quality and ensuring the robustness and imperceptibility of traditional embedded watermarking techniques, making it valuable for essential applications in many fields, such as multimedia data management.

With the different domains used in constructing feature images, there are three categories of zero-watermarking techniques. The first type comprises spatial-domain-based zero-watermarking methods [10–15]. Yang et al. [10] suggested a zero-watermarking method that uses the center pixels of different channels of the cover image as the center of the circle, whereby the pixels covered by rings with different radii and widths constitute the feature image. After that, the final zero-watermarking signal is generated by executing an exclusive-OR operation on the encrypted copyrighted and feature images. Chang et al. [11] proposed a method using secret sharing that exhibits strong robustness and security. Chang et al. [12] used a Sobel operator to extract the texture and edge features of a cover image to construct a robust zero-watermarking signal. Zou et al. [13] proposed a similarity retrieval method with good resistance to attack. These methods process
∗ Corresponding author at: School of Big Data and Computer Science, Guizhou Normal University, Guiyang 550025, China.
E-mail address: xxg0851@163.com (X. Xiong).

https://doi.org/10.1016/j.csi.2025.104115
Received 3 April 2025; Received in revised form 12 November 2025; Accepted 8 December 2025
Available online 8 December 2025
0920-5489/© 2025 Elsevier B.V. All rights are reserved, including those for text and data mining, AI training, and similar technologies.
the cover image directly in the pixel domain, offering implementation advantages of simplicity and intuitiveness.

The second type of method is frequency-domain-based [16–22]. Yang et al. [16] proposed a method based on the non-subsampled Shearlet transform and Schur decomposition, which achieved better anti-attack performance. Huang et al. [17] extracted low-frequency sub-bands (LSs) using a dual-tree complex wavelet transform (DTCWT), partitioned the LSs, and used Hessenberg decomposition to yield a robust signal. Lu et al. [18] proposed fusing the cover images with the gray-weighted averaging fusion method, generating a robust zero-watermarking signal using the fast finite Shearlet transform and Schur decomposition. Wu et al. [20] presented a robust scheme for constructing a zero-watermarking signal to encrypt medical images using the Contourlet transform (CT). These methods generate robust zero-watermarking signals by transforming the cover image from the spatial to the frequency domain, leveraging frequency-domain properties that offer enhanced resistance against non-geometric attacks.

Although the above methods are effective against conventional image processing attacks, they are not effective against large-scale geometric attacks such as rotation, scaling, and cropping, because the features extracted by these methods are not geometrically invariant. Therefore, to enhance the resilience of zero-watermarking methods against geometric attacks, some scholars have proposed the use of continuous orthogonal moments, which possess stability and geometric invariance [23–27], to optimize the construction and verification of zero-watermarking signals. This falls into the third category of zero-watermarking methods. Bessel–Fourier moments (BFMs) [23] are among the most representative continuous orthogonal moments. Their radial polynomials are considered to be feature functions with good orthogonality and are widely used in the field of pattern recognition. Gao et al. [24] proposed a robust method using BFMs. This method first normalizes the cover image to obtain translation and scaling invariance. It then computes the BFMs of the normalized image to construct a zero-watermarking signal using moment-rotation invariance. Subsequently, neural networks were introduced into watermarking techniques to comprehensively improve their adaptability and robustness in the face of complex and changing image processing and geometric attacks [28–36]. Gong et al. [28] proposed a robust medical image zero-watermarking method based on a residual DenseNet. He et al. [29] proposed a robust image method based on shrinkage and a redundant feature elimination network. Such methods provide a higher level of understanding and protection of image content using the superb feature extraction capability of neural networks. However, neural-network-based zero-watermarking methods face multiple challenges, including substantial training data requirements, high computational complexity, limited interpretability, and susceptibility to adversarial attacks.

All of the above methods satisfy the basic requirements of digital watermarking technology. However, most of these methods lack a strong anti-attack ability to resist diverse attacks, with poor performance against geometric and combinatorial attacks. Additionally, the costs of centralized protection and the occupation of storage space for multiple images are relatively high. To address these issues, a zero-watermarking method that combines CT, Chebyshev–Fourier moments (CHFMs), and the fast Fourier transform (FFT) is proposed. This approach leverages the directional selectivity and sparsity of CT, the orthogonality and rotational invariance of CHFMs, and the computational efficiency and numerical stability of the FFT. Compared with zero-watermarking methods that use only the frequency domain or orthogonal moments, this method enhances robustness against geometric and combinatorial attacks by combining CT and CHFMs, which fully utilize the multi-scale features of CT and the geometric invariance of CHFMs. Additionally, the method adopts DTCWT-based fusion for efficient multi-image protection and storage reduction. The main contributions are as follows:

(1) The effects of conventional attacks on the stability of feature vectors extracted from cover images were analyzed. The results indicate that the extracted feature vectors are highly resistant to attacks.

(2) The protection cost of multiple medical images was reduced by a fusion operation using a dual-tree complex wavelet transform-based method.

(3) The CT and CHFMs were employed to construct a zero-watermarking signal to address the problem that existing methods resist only a limited range of attacks.

(4) The Lorenz chaotic system and Fibonacci Q-matrix were utilized to encrypt the copyrighted image to heighten the proposed method's security.

The remainder of the paper is organized as follows: Section 2 presents the basic theory, including the Lorenz chaotic system and Fibonacci Q-matrix, the image normalization technique, the image fusion method using DTCWT, CHFMs, the CT, and the FFT. Section 3 analyzes the effect of conventional attacks on the stability of feature vectors extracted from cover images. Section 4 describes the key steps of the copyrighted image encryption, zero-watermarking signal construction, and detection. Section 5 presents the attack resistance of our method and evaluates its superiority by comparing it with similar methods. The final section concludes this paper.

2. Basic theory

2.1. Lorenz chaotic system

The Lorenz chaotic system is a nonlinear dynamic system discovered by the American meteorologist Edward Norton Lorenz in 1963 during his research on weather changes. The system models atmospheric convective motion using three-dimensional ordinary differential equations, generating high-quality chaotic sequences free from short-cycle effects. Its unpredictability and randomness make it particularly suitable for image encryption applications. The Lorenz chaotic system [37] is represented as follows:

ẋ = a(y − x)
ẏ = x(c − z) − y        (1)
ż = xy − bz

where a, b, and c denote the three constants of the Lorenz chaotic system, and x, y, and z represent its three state variables. The Lorenz chaotic system produces a butterfly-shaped chaotic attractor, as displayed in Fig. 1(a), when a = 10, b = 8/3, c = 28, and (x_0, y_0, z_0) = (0.1, 0.1, 0.1). As Fig. 1(a) shows, the system is bounded, stochastic, and non-periodic. Fig. 1(b) and (c) illustrate the bifurcation and Lyapunov exponent plots under variations in parameter c.

2.2. Fibonacci Q-matrix

To enhance encryption security and reliability, researchers utilize properties of the Fibonacci sequence in their method design, significantly improving protection capabilities for enhanced privacy preservation and information security. The recurrence formula for the Fibonacci sequence [38] is as follows:

F_n = F_{n−1} + F_{n−2},  n > 2        (2)

where F_1 = F_2 = 1 and F_n denotes the nth Fibonacci number.

The Fibonacci Q-matrix is constructed using Fibonacci numbers. It is usually represented as a 2 × 2 matrix as follows:

Q = [1 1; 1 0]        (3)

The corresponding inverse matrix Q^{−1} of the Q-matrix is:

Q^{−1} = [0 1; 1 −1]        (4)
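Eq. (1) has no closed-form solution, so in practice the chaotic key sequences are obtained by numerical integration. The sketch below is illustrative rather than the authors' implementation: it advances Eq. (1) with a forward-Euler step, and the step size h is an assumed parameter.

```python
def lorenz_sequence(x0, y0, z0, steps, h=0.002, a=10.0, b=8.0 / 3.0, c=28.0):
    """Integrate the Lorenz system of Eq. (1) with a forward-Euler step.

    Returns three chaotic sequences X, Y, Z of length `steps`. The Euler
    scheme and the step size h are illustrative choices, not the paper's.
    """
    x, y, z = x0, y0, z0
    X, Y, Z = [], [], []
    for _ in range(steps):
        dx = a * (y - x)       # x' = a(y - x)
        dy = x * (c - z) - y   # y' = x(c - z) - y
        dz = x * y - b * z     # z' = xy - bz
        x, y, z = x + h * dx, y + h * dy, z + h * dz
        X.append(x)
        Y.append(y)
        Z.append(z)
    return X, Y, Z
```

With a = 10, b = 8/3, and c = 28, as in Fig. 1, the resulting sequences stay bounded on the butterfly attractor yet are extremely sensitive to the initial values, which is the property exploited for encryption in Section 4.1.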
Fig. 1. Chaotic attractor diagram, bifurcation diagram, and Lyapunov exponential spectrum of the Lorenz system.

Fig. 2. Experiment results of image normalization.

The nth power of the Q-matrix is defined as follows:

Q^n = [F_{n+1} F_n; F_n F_{n−1}]        (5)

The determinant of Q^n can be expressed as:

Det(Q^n) = F_{n+1} F_{n−1} − F_n^2 = (−1)^n        (6)

The corresponding inverse matrix Q^{−n} of Q^n is given below; it holds when n is even, since then Det(Q^n) = 1:

Q^{−n} = [F_{n−1} −F_n; −F_n F_{n+1}]        (7)

2.3. Image normalization

Normalization is a critical step in image processing and computer vision [39]. Generally, the cover image after the normalization operation is transformed into a standard form that can resist attacks from affine transformations, such as translation, rotation, and scaling. The two-dimensional (p + q)-order moment of the cover image f(x, y) is defined as:

m_pq = Σ_x Σ_y x^p y^q f(x, y)        (8)

where p, q = 0, 1, 2, 3, ..., and the image central moments are defined as:

u_pq = Σ_x Σ_y (x − x̄)^p (y − ȳ)^q f(x, y)        (9)

where (x̄, ȳ) is the center of mass of the image, with x̄ = m_10/m_00 and ȳ = m_01/m_00. The covariance matrix of the cover image is defined as M = [u_20 u_11; u_11 u_02]. The normalization operation for the image is based on the invariance of this matrix as follows:

[x_m; y_m] = [cos α sin α; −sin α cos α] [c/√λ_1 0; 0 c/√λ_2] [e_1x e_1y; −e_1y e_1x] [x − x̄; y − ȳ]        (10)

where λ_1 and λ_2 are the eigenvalues of M, and the corresponding eigenvectors are [e_1x, e_1y]^T and [e_2x, e_2y]^T, respectively.

The images before and after the normalization process are shown in Fig. 2 for four standard medical images, where the original images are shown in (a)–(d), and the corresponding normalized versions are shown in (e)–(h).

2.4. Image fusion using DTCWT

The dual-tree complex wavelet transform (DTCWT) is a technique that combines the multi-scale analysis capabilities of the discrete wavelet transform with high computational efficiency. It utilizes a tree structure with low-pass and high-pass filter banks to decompose the real and imaginary parts of the image into multiple scales. At each scale, the DTCWT generates a low-frequency component and six detail components with different orientations (±15°, ±45°, ±75°). Recently, the DTCWT has been widely adopted in image fusion [39]. The DTCWT efficiently extracts multi-scale image details, producing fused images with richer content and improved visual quality.

(1) DTCWT of the image. Apply the DTCWT to the original images for 1-level decomposition to obtain the low-frequency coefficients (LL_1, LL_2, ..., LL_k) and high-frequency coefficients (HL_k, LH_k, HH_k) with the following equation:

[LL_k, HL_k, LH_k, HH_k] = DTCWT(I_k)        (11)

where k = 1, 2, ..., n.

(2) Fusion of high-frequency coefficients. Calculate the energy of each coefficient and its neighboring region in the high-frequency sub-bands of all images. The window size is set to 2r + 1, where r is the window radius. Within this window, each coefficient is given a weight of 1/(2r + 1)^2. The local energy E_k(x, y) of image k at position (x, y) is calculated as:

E_k(x, y) = Σ_{m=x−r}^{x+r} Σ_{n=y−r}^{y+r} |f_k(m, n)|^2        (12)

The fused high-frequency coefficients are selected from the image with maximum energy at each position. The relevant formula is shown below:

HF(x, y) = arg max_k E_k(x, y)        (13)

For positions where multiple images have equal maximum energy, the average of the coefficients of those images is taken.

(3) Fusion of low-frequency coefficients. The maximum coefficients across all images in the LSs are selected:

LF(i, j) = max(LL_1(i, j), LL_2(i, j), ..., LL_k(i, j))        (14)

(4) Image reconstruction. With the fused high-frequency details and low-frequency coefficients, the reconstructed image is obtained by applying the inverse DTCWT to the fused data.
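The fusion rules in Eqs. (12)–(13) can be sketched independently of any particular DTCWT library. The toy implementation below assumes plain 2-D lists for the high-frequency sub-band coefficients, a window radius r = 1, and clipping of the window at the image border (the paper does not specify border handling):

```python
def local_energy(band, x, y, r=1):
    """E_k(x, y) of Eq. (12): sum of squared coefficients in a
    (2r+1) x (2r+1) window, clipped at the image border (assumption)."""
    h, w = len(band), len(band[0])
    e = 0.0
    for m in range(max(0, x - r), min(h, x + r + 1)):
        for n in range(max(0, y - r), min(w, y + r + 1)):
            e += band[m][n] ** 2
    return e

def fuse_high_freq(bands, r=1):
    """Eq. (13): at each position keep the coefficient of the band with
    maximum local energy; ties are resolved by averaging the tied bands."""
    h, w = len(bands[0]), len(bands[0][0])
    fused = [[0.0] * w for _ in range(h)]
    for x in range(h):
        for y in range(w):
            energies = [local_energy(b, x, y, r) for b in bands]
            best = max(energies)
            winners = [b[x][y] for b, e in zip(bands, energies) if e == best]
            fused[x][y] = sum(winners) / len(winners)
    return fused
```

The same loop with `max` in place of the energy comparison gives the low-frequency rule of Eq. (14).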
Fig. 3. Experimental results of image fusion using DTCWT.

The normalized images from Fig. 2 were fused using the aforementioned image fusion technique. Fig. 3 presents the fusion results, where Fig. 3(a) shows the fusion of images Fig. 2(f) and (g); Fig. 3(b) displays the fusion incorporating images Fig. 2(e)–(g); and Fig. 3(c) demonstrates the fusion combining images Fig. 2(e)–(h).

2.5. Contourlet transform

The Contourlet transform (CT) [40] is a dual-filter structure that is effective in obtaining sparse expansions of typical images with smooth contours due to its unique multi-resolution and multidirectional capability. The Laplacian pyramid is utilized to capture point discontinuities in the image, while a bank of directional filters connects these discontinuities into linear structures. Basic elements such as contour lines are used for image expansion, which facilitates the reconstruction of complex image features. Fig. 4 shows a schematic of the decomposition of a 512 × 512 image using the CT.

Fig. 4. Schematic of the CT.

2.6. Chebyshev–Fourier moments

The Chebyshev–Fourier moments (CHFMs) were proposed by Ping et al. [41] in 2002 and entail the following key steps.

In polar coordinates (r, θ), the Chebyshev–Fourier function P_nm(r, θ) consists of two components: the radial function R_n(r) and the angular function exp(jmθ):

P_nm(r, θ) = R_n(r) exp(jmθ)        (15)

where

R_n(r) = √(8/π) ((1 − r)/r)^{1/4} Σ_{k=0}^{⌊n/2⌋} (−1)^k [(n − k)! / (k!(n − 2k)!)] [2(2r − 1)]^{n−2k}        (16)

In 2007, Ping et al. [42] showed that CHFMs are deformations of the Jacobi–Fourier moments (p = 2, q = 3/2), and thus the radial function R_n(r) of CHFMs can be expressed as:

R_n(r) = √(8/π) ((1 − r)/r)^{1/4} Σ_{k=0}^{n} (−1)^k [(n + k + 1)! 2^{2k} / ((n − k)!(2k + 1)!)] r^k        (17)

The functions P_nm(r, θ) are orthogonal within the unit circle, where 0 ≤ r ≤ 1 and 0 ≤ θ ≤ 2π:

∫_0^{2π} ∫_0^1 P_nm(r, θ) P_kl(r, θ) r dr dθ = δ_nmkl        (18)

where δ_nmkl is the Kronecker delta. The image function f(r, θ) can be decomposed orthogonally in the polar coordinate system by the function system P_nm(r, θ). Reconstruction using the CHFMs is thus made possible, and the image reconstruction function f(r, θ) can subsequently be written as:

f(r, θ) = Σ_{n=0}^{∞} Σ_{m=−∞}^{+∞} φ_nm R_n(r) exp(jmθ)        (19)

where φ_nm is the CHFM of image f(r, θ):

φ_nm = ∫_0^{2π} ∫_0^1 f(r, θ) R_n(r) exp(−jmθ) r dr dθ        (20)

2.7. Fast Fourier transform

The FFT is a fast algorithm based on the discrete Fourier transform (DFT) that leverages the inherent properties of the DFT, including symmetry, periodicity, and the relationship between odd and even terms. It uses this intrinsic periodicity and symmetry to decompose one long DFT into the sum of many short DFTs [43]. The FFT can be represented mathematically as follows:

X_k = Σ_{n=0}^{N−1} x_n · e^{−i2πkn/N}        (21)

where k = 0, 1, 2, ..., N − 1.

Computing the DFT of a discrete signal using Eq. (21) requires N × N steps, whereas the FFT computes the DFT of a discrete signal by dividing the DFT equation into two independent components, as shown in Eq. (22):

X_k = Σ_{m=0}^{N/2−1} x_{2m} · e^{−i2πkm/(N/2)} + e^{−i2πk/N} Σ_{m=0}^{N/2−1} x_{2m+1} · e^{−i2πkm/(N/2)}        (22)

where Σ_{m=0}^{N/2−1} x_{2m} · e^{−i2πkm/(N/2)} is the DFT of the even-indexed samples and e^{−i2πk/N} Σ_{m=0}^{N/2−1} x_{2m+1} · e^{−i2πkm/(N/2)} is the weighted DFT of the odd-indexed samples.

3. Effect of the attacks on the stability of extracted feature vectors

The performance of zero-watermarking methods against attacks mainly depends on whether the essential features extracted when constructing a zero-watermarking signal exhibit strong robustness against attacks. In this study, we first normalized and fused multiple images. Then, we extracted the effective regions of the fused images and performed the CT and CHFMs to generate the magnitude sequence. Finally, an FFT was performed on the generated magnitude sequence to obtain 64-bit feature vectors. To validate the ability of the proposed method to resist attacks, the following two experiments were conducted:

(1) The stability of the extracted feature vectors of the cover image against various attacks was verified on the Chest X-ray image shown in Fig. 5. Table 1 shows the corresponding results. As observed, the extracted feature vectors (64 bits) under different attacks are almost unchanged, and the correlation coefficients are all higher than 0.984, indicating that the extracted feature vectors exhibit strong robustness in the face of various attacks.

(2) The uniqueness of the feature vectors generated from the fused images was verified on the feature vectors extracted from the images in Fig. 5 after fusion. The experimental results are shown in Tables 2 and 3, where P1, P2, P3, and P4 denote the Heart, Chest X-ray, Brain, and Knee images, respectively. The results show that the extracted feature vectors from different fused images differ, with a similarity of approximately 0.5. In contrast, the feature vectors from the same
Fig. 5. Four original medical images and their fusion.

Table 1
Feature vectors generated under different attacks (64-bit).

Type of attack                      Generated feature vectors                                         NC
No attacks                          1111111100111111000000000000001100000000000000010111111111111111  –
JPEG compression (QF = 15)          1111111100111111000000000000001100000000000000011111111111111111  0.998
Median filtering (3 × 3)            1111111100111111000000000000001100000000000000000111111111111111  0.999
Wiener filtering (3 × 3)            1111111100111111000000000000001100000000000000010111111111111111  1.000
Gaussian noise (0.1)                1111111101111111000000000000001100000000000000011111111111111111  0.991
Salt & pepper noise (0.1)           1111111101111111000000000000001100000000000000011111111111111111  0.993
Rotation attack (10°)               1111111101101111000000000000001100000000000000010111111111111111  0.984
Scaling attack (shrink 0.25×)       1111111100111111000000000000001100000000000000010111111111111111  1.000
Cropping attack (upper left 1/16)   1111111110011111000000000000001100000000000000010111111111111111  0.992

Table 2
Feature vectors generated by different images fusion (64-bit).

Fusion of different images   Generated feature vectors
P1, P2, P3        00000011100001111111011111101111111001111100011111000000000001111
P1, P2, P4        01100011111111111111111111111111111111111111111111000000000000011
P2, P3, P4        00001110111100111000011111100011100011111100000000000000000000000
P1, P3, P4        00011111111111111111011111111111111111111111111111000000000000111
P1, P2, P3, P4    00010000000000111011111000111111111111111111000111111111111000111

Table 3
Similarity of feature vectors generated from different images fusion.

Fusion of different images   P1,P2,P3   P1,P2,P4   P2,P3,P4   P1,P3,P4   P1,P2,P3,P4
P1, P2, P3                   1.000      0.574      0.554      0.546      0.528
P1, P2, P4                   0.576      1.000      0.501      0.512      0.563
P2, P3, P4                   0.552      0.501      1.000      0.581      0.515
P1, P3, P4                   0.546      0.512      0.591      1.000      0.530
P1, P2, P3, P4               0.528      0.563      0.515      0.530      1.000

fused images are identical, with a similarity of 1.000. This indicates that the extracted feature vectors can effectively distinguish the fusion of different images.

The experimental results demonstrate that the constructed feature signal exhibits robust performance, providing a theoretical basis for utilizing the feature signal to generate a robust zero-watermarking signal.

4. Proposed method

To address the poor performance of most methods in resisting diverse attacks and the high storage space required for the centralized protection of multiple images, a robust zero-watermarking method combining image moments and multi-scale transformation is proposed. Figs. 6–8 show the flowcharts for the copyrighted image encryption and decryption, zero-watermarking construction, and detection algorithms, respectively.

4.1. Copyrighted image encryption

To enhance the security of the method, a copyrighted image CI of size m × n is encrypted using the Lorenz chaotic system and Fibonacci Q-matrix. Fig. 6 shows the experimental results after encrypting the copyrighted image using the following key steps:

Step 1: Using the original copyrighted image CI of size m × n, the initial key x_1 of the Lorenz chaotic system is computed:

x_1 = (Σ_{i=1}^{m} Σ_{j=1}^{n} CI(i, j) + (m × n)) / (2000 + (m × n))        (23)

Two new values, x_2 and x_3, are then obtained by iterating twice. Finally, x_1, x_2, and x_3 are chosen as the initial values of the state variables x, y, and z, respectively.

Step 2: Based on the selected initial values, three vectors X, Y, and Z are generated using Eq. (1), from which three sub-vectors of length (m × n)/3 are chosen to construct a vector V of length m × n.
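Steps 1–3 can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the forward-Euler integration of Eq. (1), its step size, and the reuse of x_1 for all three initial values (the paper obtains x_2 and x_3 by iterating twice, without restating the iteration rule here) are all assumptions.

```python
def initial_key(ci):
    """Eq. (23): initial key x1 from the pixel sum and size of image CI."""
    m, n = len(ci), len(ci[0])
    total = sum(sum(row) for row in ci)
    return (total + m * n) / (2000 + m * n)

def lorenz_stream(x, y, z, steps, h=0.002, a=10.0, b=8.0 / 3.0, c=28.0):
    """Forward-Euler integration of Eq. (1); returns the three state sequences."""
    X, Y, Z = [], [], []
    for _ in range(steps):
        x, y, z = (x + h * a * (y - x),
                   y + h * (x * (c - z) - y),
                   z + h * (x * y - b * z))
        X.append(x)
        Y.append(y)
        Z.append(z)
    return X, Y, Z

def scramble(ci):
    """Steps 1-3: build V from the Lorenz sequences, sort V to obtain the
    index vector IX, and permute the flattened image G by IX."""
    m, n = len(ci), len(ci[0])
    mn = m * n
    x1 = initial_key(ci)
    X, Y, Z = lorenz_stream(x1, x1, x1, mn)      # x2, x3 assumption: reuse x1
    k = mn // 3
    V = X[:k] + Y[:k] + Z[:mn - 2 * k]           # |V| = m*n (Step 2)
    IX = sorted(range(mn), key=lambda i: V[i])   # ascending-sort index (Step 3)
    G = [p for row in ci for p in row]           # flatten CI into G
    R = [G[i] for i in IX]                       # scrambled vector R
    return R, IX

def unscramble(R, IX, m, n):
    """Inverse permutation: restores G and reshapes it to m x n."""
    G = [0] * len(R)
    for pos, i in enumerate(IX):
        G[i] = R[pos]
    return [G[r * n:(r + 1) * n] for r in range(m)]
```

Because IX is a permutation of the pixel indices, the scrambling is lossless and exactly invertible, which is what allows the decryption step in Section 4.3 to recover the copyrighted image bit for bit.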
X. Lu et al. Computer Standards & Interfaces 97 (2026) 104115
|
||
|
||
|
||
|
||
|
||
Fig. 6. An example of simple copyrighted image encryption and decryption.
|
||
|
||
|
||
Step 3: The copyrighted image CI is first reshaped into a one- method is used to construct the binary feature image 𝐹 = {𝑓 (𝑖, 𝑗), 1 ≤
|
||
dimensional vector G, and then, the sequence V is sorted in ascending 𝑖 ≤ 𝑚, 1 ≤ 𝑗 ≤ 𝑛}.
|
||
order to obtain index IX. Finally, G is permuted using IX to generate a {
|
||
1, 𝐶(𝑖, 𝑗) ≥ 𝑀
|
||
scrambled one-dimensional vector R. 𝐹 (𝑖, 𝑗) = (25)
|
||
0, 𝐶(𝑖, 𝑗) < 𝑀
|
||
Step 4: The R vector is reshaped into a matrix of size 𝑚 × 𝑛, and the
|
||
matrix is partitioned into blocks of size 2 × 2. Step 9: Perform an XOR operation between the feature matrix
|
||
Step 5: Set the parameter 𝑛 = 20 in Eq. (5) to compute 𝑄𝑛 . Then, F obtained in Step 8 and the encrypted copyrighted image ECI in
|
||
perform a modulo-2 operation on each term in 𝑄𝑛 to obtain a binary Section 4.1 to get a robust zero-watermarking image, which is then
|
||
matrix. authenticated and registered with a third-party intellectual property
|
||
Step 6: Using the Fibonacci Q-matrix construction method intro- rights (IPR). The unique ID number is then saved as the basis for
|
||
duced in Section 2.2, an exclusive-OR operation is performed between copyright extraction. The zero-watermarking image construction and
|
||
each block of size 2 × 2 and the Fibonacci Q-matrix to obtain an registration processes are thus completed.
|
||
encrypted copyrighted image (ECI ). 𝑍 = XOR (𝐸𝐶𝐼, 𝐹 ) (26)
|
||
The image decryption step is simply the reverse of the encryption
|
||
step and is not described here.
|
||
4.3. Zero-watermarking detection
|
||
4.2. Zero-watermarking construction
|
||
The zero-watermarking detection process is the reverse of the zero-
|
||
watermarking construction method. Below is a description of the key
|
||
Assuming that the sizes of the four cover images I and the copy-
|
||
steps.
|
||
righted image CI are 𝑀 × 𝑁 and 𝑚 × 𝑛, respectively. A robust feature
|
||
Step 1: Same as Step 1 of the zero-watermarking signal generation
|
||
image is constructed by combining image moments and multi-scale
|
||
process, four gray-scale images of size 𝑀 × 𝑁 are normalized using
|
||
transforms, and a robust zero-watermarking signal is generated by
|
||
the method described in Section 2.3, followed by scaling and rotation
|
||
performing an exclusive-OR (XOR) operation with the encrypted copy-
|
||
normalizations to produce standard normalized images.
|
||
righted image. The key steps of the proposed method are outlined as
|
||
Step 2: The corresponding feature image is obtained by performing
|
||
follows.
|
||
Step 1: Using the moment-based image normalization technique in Section 2.3, four gray-scale images of size M × N are subjected to the corresponding normalization process. Then, scaling and rotation normalizations are applied to obtain four standard normalized images.

Step 2: A new fused image (FI) is generated by fusing the information of the four normalized images using the image fusion method in Section 2.4.

Step 3: For a fused image FI of size M × N, the geometric center of FI is defined as x = M/2, y = N/2. The effective region (ER) of size P × Q is extracted from the fused image FI using Eq. (24).

ER = FI[(x − P/2) : (x + P/2 − 1), (y − Q/2) : (y + Q/2 − 1)]   (24)

Step 4: Using the Contourlet transform, the LSs are obtained from the extracted ER. A square region (SR) of size ((M + N)/2) × ((M + N)/2) is then selected from the LSs.

Step 5: The maximum order n_max = 25 is selected, and the region SR is computed using Eq. (15) to obtain (n_max + 1)(2n_max − 1) CHFMs.

Step 6: To make the number of CHFMs the same size as the copyrighted image, m × n moment values are obtained by expanding the amplitude sequence of the (n_max + 1)(2n_max − 1) moments, converting them into a one-dimensional vector A = {a(i), 1 ≤ i ≤ m × n} of length m × n.

Step 7: The FFT is performed on the one-dimensional vector A to generate a one-dimensional vector B = {b(i), 1 ≤ i ≤ m × n}.

Step 8: Reshape the vector B into a two-dimensional matrix C. Calculate the mean value M of the matrix C and binarize it using M as a threshold. Specifically, if the value of an element of C is greater than or equal to M, the feature bit is 1; otherwise, the feature bit is 0. This …

… the normalized standard images following Steps 2–8 in Section 4.2.

Step 3: A zero-watermarking image saved by a third-party authentication center can be obtained using the ID number. Then, an XOR operation is performed on the zero-watermarking image and the generated feature image, resulting in an undecrypted copyrighted image (UCI).

UCI = XOR(Z, F′)   (27)

Step 4: The original copyrighted image CI can be recovered by decrypting the undecrypted copyrighted image UCI using the Lorenz chaotic system and the Fibonacci Q-matrix. Because the original CI is a meaningful and recognizable image, the human eye can directly authenticate the recovered copyrighted image.

5. Experimental results and analysis

5.1. Experimental parameters

To verify the effectiveness of our method, simulation experiments were conducted in a software environment consisting of MATLAB R2023a running on Microsoft Windows 11. Four 512 × 512 standard medical images, Heart, Chest X-ray, Brain, and Knee, were chosen as experimental images, as shown in Fig. 9(a)∼(d). Fig. 9(e) shows the original binary copyrighted image, which is a 64 × 64 pixel binary image composed of a binary sequence of length 4096. Fig. 9(f) displays a zero-watermarking image generated by the proposed method. As can be seen, the resulting zero-watermarking image looks cluttered and, if not recovered, is unrecognizable to the human eye.
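The cropping of Eq. (24), the binarization of Steps 6–8, and the XOR recovery of Eq. (27) can be sketched in NumPy as follows. This is a minimal illustration under stated simplifications, not the authors' implementation: the Contourlet transform and CHFM computation of Steps 4–5 are replaced by a precomputed amplitude sequence, and `np.resize` stands in for the paper's amplitude-expansion step.

```python
import numpy as np

def effective_region(fi, p, q):
    """Eq. (24): crop a p x q effective region around the geometric center."""
    x, y = fi.shape[0] // 2, fi.shape[1] // 2
    return fi[x - p // 2 : x + p // 2, y - q // 2 : y + q // 2]

def feature_image(amplitudes, m, n):
    """Steps 6-8 sketch: expand a CHFM amplitude sequence to m*n values,
    apply the FFT, reshape to an m x n matrix, and binarize at its mean."""
    a = np.resize(np.asarray(amplitudes, dtype=float), m * n)  # vector A
    b = np.abs(np.fft.fft(a))                                  # |vector B|
    c = b.reshape(m, n)                                        # matrix C
    return (c >= c.mean()).astype(np.uint8)                    # feature bits

def recover(zero_watermark, feature):
    """Eq. (27): XOR the stored zero-watermark with a recomputed feature."""
    return np.bitwise_xor(zero_watermark, feature)
```

Because XOR is an involution, XOR-ing Z = XOR(F, CI) with an identical feature image F′ returns the (still encrypted) copyrighted image exactly; robustness then hinges on F′ staying close to F under attacks.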
Fig. 7. Flowchart of zero-watermarking construction method.
Fig. 8. Flowchart of zero-watermarking detection method.
Fig. 9. Original medical image, original copyrighted image, and generated zero-watermarking image.
5.2. Evaluation indicators

The attack resistance of the methods is measured using a generalized normalized correlation coefficient (NC), and the quality of the reconstructed image is evaluated using a generalized mean-squared reconstruction error (MSRE) to objectively assess the methods' performance.

(1) Normalized correlation
The NC value is commonly used to measure the similarity between a copyrighted image extracted from an attacked cover image and the original copyrighted image. The NC value typically falls between 0 and
Fig. 10. (a) Variations in the value of R_n(r) with r, in the interval 0 < r ≤ 1, n_max = 1, 2, 9, 10. (b) MSRE values corresponding to CHFMs with different orders for the grayscale image Heart.
1, where 0 indicates that the two images are not similar, and 1 indicates that they are identical. In other words, the higher the NC value, the more similar the two images are, suggesting that the method is more resistant to attacks.

NC(OCI, ECI) = \frac{\sum_{i=1}^{m}\sum_{j=1}^{n} OCI(i,j)\,ECI(i,j)}{\sqrt{\sum_{i=1}^{m}\sum_{j=1}^{n} OCI(i,j)^{2}}\,\sqrt{\sum_{i=1}^{m}\sum_{j=1}^{n} ECI(i,j)^{2}}}   (28)

where both OCI and ECI are of size m × n; OCI refers to the original copyrighted image, while ECI is the copyrighted image extracted after the cover image has undergone an attack.

(2) Mean-squared reconstruction error
As a generalized tool, the quality of the reconstructed images can be objectively assessed using the MSRE in Eq. (29). In general, the smaller the MSRE value, the lower the error between the reconstructed and original images, indicating better image quality; conversely, a higher MSRE value suggests poorer reconstruction quality.

MSRE = \frac{\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} [I'(x,y) - I(x,y)]^{2}\,dx\,dy}{\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} [I'(x,y)]^{2}\,dx\,dy}   (29)

where I and I′ denote the original and reconstructed images, respectively.

5.3. Image reconstruction experiments

Fig. 10(a) shows the variation in the values of the radial polynomial function R_n(r) (Eq. (16)) in the interval [0, 1]. It can be seen that R_n(r) has n zeros, which are uniformly distributed in the interval [0, 1], and the values of the function located near the zeros of different orders are almost the same.

To verify the reconstruction ability of the CHFMs, experiments were conducted by setting the parameters p = 2 and q = 1.5 and selecting a standard medical Heart image of size 512 × 512. Figs. 10(b) and 11 show the corresponding MSRE values and reconstructed images for n = 0, 5, …, 25. As shown in Fig. 10(b), the MSRE is lowest when n_max = 25, and the best-quality reconstructed image in Fig. 11 is likewise observed for n_max = 25.

5.4. Resistance to regular attack experiments

In this section, the NC value is used to quantitatively assess the quality of the extracted copyrighted image, which reflects the method's resistance to attacks. The results show that when the cover image is not attacked, the NC value between the extracted and original copyrighted images is 1.000. In addition, the proposed method exhibited strong robustness when the cover image was attacked. To perform a systematic and robust assessment of the proposed method, individual images, as well as two-, three-, and four-image fusions, were tested for their resistance to attacks. The detailed experiments are described below.

5.4.1. JPEG compression attack
The ability to resist JPEG compression attacks is summarized in Table 4. It can be seen that the proposed method is highly resistant to JPEG compression, with an average NC value of 0.995. This may be because the proposed method computes the CHFMs on the LSs of the CT transform, where the information is more concentrated, thereby enhancing its resistance to JPEG compression.

Table 4
Results of resisting JPEG compression.
Fusion of           Quality factor (QF)                      Average
different images    5      10     15     20     25
P1                  0.985  0.990  0.986  0.989  0.996   0.989
P2                  0.996  0.999  0.998  1.000  0.999   0.998
P3                  0.995  0.994  0.997  0.998  0.998   0.996
P4                  0.987  0.996  0.998  1.000  0.999   0.996
P1, P2              0.992  0.994  0.998  0.998  0.999   0.996
P1, P3              0.991  0.991  0.993  0.993  0.993   0.992
P1, P4              0.989  0.995  0.994  0.994  0.995   0.993
P2, P3              0.994  0.996  0.996  0.998  0.998   0.996
P2, P4              0.994  0.997  0.998  0.999  0.999   0.997
P3, P4              0.989  0.996  0.998  1.000  0.999   0.996
P1, P2, P3          0.991  0.997  0.998  0.998  0.999   0.996
P1, P2, P4          0.994  0.996  0.998  0.998  0.999   0.997
P2, P3, P4          0.994  0.995  0.998  0.997  0.999   0.996
P1, P3, P4          0.987  0.991  0.989  0.990  0.989   0.989
P1, P2, P3, P4      0.994  0.999  0.999  0.998  0.998   0.997

5.4.2. Noise attack
Table 5 summarizes the experimental results against Gaussian white noise and salt & pepper noise attacks, and Table 6 lists the results for Gaussian noise and speckle noise attacks. Note that for the Gaussian white noise attack, the values of the parameter intensity are 0, 0.5, and 1. It is observed that our method has high resistance to noise attacks, with average NC values of 0.968, 0.963, 0.958, and 0.989 against Gaussian white noise, salt & pepper noise, Gaussian noise, and speckle noise attacks, respectively. This may be because the technique used in the proposed method suppresses noise in the transform domain, which enhances its ability to resist noise attacks.
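For reference, the two evaluation indicators of Eqs. (28) and (29) map directly onto NumPy; in the minimal sketch below, the integrals of Eq. (29) are discretized into sums over pixels (an assumption appropriate for digital images).

```python
import numpy as np

def nc(oci, eci):
    """Eq. (28): normalized correlation between two same-sized images."""
    oci = oci.astype(float)
    eci = eci.astype(float)
    num = np.sum(oci * eci)
    den = np.sqrt(np.sum(oci**2)) * np.sqrt(np.sum(eci**2))
    return num / den

def msre(original, reconstructed):
    """Eq. (29), discretized: squared reconstruction error normalized by
    the energy of the reconstructed image I'."""
    i = original.astype(float)
    ir = reconstructed.astype(float)
    return np.sum((ir - i) ** 2) / np.sum(ir**2)
```

For any nonzero image a, nc(a, a) is 1.0 and msre(a, a) is 0.0, matching the stated ideal values of the two indicators.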
Fig. 11. Samples of CHFMs reconstructed with different orders.
Table 5
Results of resisting Gaussian white noise and salt & pepper noise attacks.
Fusion of           Gaussian white noise                                                        Average   Salt & pepper noise                  Average
different images    (0.05,0.025,0)  (0.1,0.05,0)  (0.2,0.1,0)  (0.1,0.1,0.1)  (0.5,0.25,0)                0.1    0.2    0.3    0.4    0.5
P1                  0.936  0.914  0.904  0.915  0.900   0.914   0.951  0.922  0.903  0.894  0.896   0.913
P2                  0.996  0.993  0.990  0.990  0.979   0.990   0.993  0.990  0.985  0.979  0.974   0.984
P3                  0.964  0.956  0.944  0.958  0.931   0.950   0.978  0.958  0.951  0.941  0.937   0.953
P4                  0.976  0.959  0.943  0.960  0.930   0.954   0.982  0.966  0.949  0.938  0.932   0.954
P1, P2              0.992  0.988  0.979  0.981  0.962   0.981   0.984  0.980  0.971  0.959  0.950   0.969
P1, P3              0.978  0.963  0.956  0.963  0.937   0.959   0.978  0.968  0.950  0.940  0.925   0.952
P1, P4              0.979  0.973  0.959  0.970  0.948   0.966   0.980  0.971  0.953  0.950  0.935   0.958
P2, P3              0.994  0.991  0.985  0.986  0.976   0.986   0.990  0.984  0.980  0.979  0.973   0.981
P2, P4              0.994  0.988  0.979  0.985  0.966   0.982   0.991  0.983  0.976  0.971  0.958   0.976
P3, P4              0.984  0.970  0.963  0.974  0.948   0.968   0.986  0.974  0.964  0.956  0.952   0.966
P1, P2, P3          0.991  0.980  0.974  0.976  0.957   0.976   0.987  0.971  0.966  0.959  0.953   0.967
P1, P2, P4          0.988  0.979  0.971  0.977  0.963   0.976   0.986  0.978  0.968  0.963  0.956   0.970
P2, P3, P4          0.987  0.982  0.974  0.980  0.961   0.977   0.986  0.981  0.973  0.961  0.953   0.971
P1, P3, P4          0.978  0.971  0.961  0.967  0.947   0.965   0.978  0.969  0.957  0.949  0.936   0.958
P1, P2, P3, P4      0.990  0.984  0.976  0.972  0.965   0.977   0.997  0.994  0.989  0.990  0.987   0.991
Table 6
Results of resisting Gaussian noise and speckle noise attacks.
Fusion of           Gaussian noise                       Average   Speckle noise                        Average
different images    0.1    0.2    0.3    0.4    0.5                0.1    0.2    0.3    0.4    0.5
P1                  0.926  0.903  0.894  0.898  0.894   0.903   0.993  0.987  0.983  0.983  0.977   0.985
P2                  0.991  0.986  0.977  0.972  0.960   0.977   0.996  0.995  0.993  0.992  0.992   0.993
P3                  0.965  0.954  0.947  0.939  0.932   0.947   0.997  0.996  0.994  0.994  0.992   0.995
P4                  0.971  0.952  0.939  0.934  0.930   0.945   0.997  0.997  0.993  0.991  0.990   0.993
P1, P2              0.984  0.971  0.959  0.950  0.944   0.962   0.991  0.988  0.989  0.979  0.978   0.985
P1, P3              0.970  0.957  0.941  0.931  0.919   0.944   0.992  0.986  0.987  0.980  0.981   0.986
P1, P4              0.977  0.963  0.949  0.938  0.933   0.952   0.988  0.988  0.987  0.980  0.980   0.985
P2, P3              0.989  0.984  0.980  0.977  0.974   0.981   0.996  0.993  0.992  0.991  0.989   0.992
P2, P4              0.987  0.979  0.970  0.962  0.957   0.971   0.995  0.994  0.986  0.987  0.987   0.990
P3, P4              0.976  0.970  0.960  0.956  0.950   0.963   0.997  0.995  0.989  0.989  0.990   0.992
P1, P2, P3          0.983  0.967  0.961  0.950  0.942   0.961   0.992  0.988  0.985  0.984  0.979   0.986
P1, P2, P4          0.979  0.970  0.964  0.957  0.951   0.964   0.989  0.992  0.989  0.983  0.984   0.988
P2, P3, P4          0.982  0.975  0.966  0.959  0.956   0.968   0.994  0.996  0.985  0.990  0.986   0.990
P1, P3, P4          0.973  0.960  0.952  0.940  0.935   0.952   0.990  0.991  0.985  0.983  0.986   0.987
P1, P2, P3, P4      0.990  0.984  0.976  0.972  0.965   0.977   0.997  0.994  0.989  0.990  0.987   0.991
5.4.3. Filtering attack
Table 7 lists the experimental results for median and Wiener filtering attacks, and Table 8 gives the experimental results for Gaussian low-pass and mean filtering attacks. As observed, our method has high resistance to filtering attacks, with average NC values of 0.994, 0.997, 1.000, and 0.994 against median filtering, Wiener filtering, Gaussian low-pass filtering, and mean filtering attacks, respectively. This may be because the CT transform used in the proposed method provides a nuanced and compelling characterization of the local and global features of the cover image in the transform domain, which keeps the cover image highly stable when subjected to a filtering attack, effectively enhancing its ability to resist filtering attacks.

5.5. Resistance to geometric attack experiments

5.5.1. Offset rows, columns, and cropping attacks
Table 9 provides experimental results for offset (row and column) and cropping attacks. The proposed method exhibits robustness against both, with an average NC of 0.989 against offset attacks and 0.965 against cropping attacks. This finding can be attributed to two key reasons. First, the Chebyshev polynomials are mutually orthogonal within a specific interval, which helps reduce interference between different frequency components, thereby improving the stability and robustness of the signal. Second, the FFT converts the amplitude signal from the time domain to the frequency domain; although an offset or cropping attack causes the loss of some key information in the time domain, the information remains relatively intact in the frequency domain. Consequently, the proposed method can effectively resist offset and cropping attacks.

5.5.2. Scaling attack
The results of the scaling attack are summarized in Table 10. Note that after the image is scaled by a factor of x, it must be scaled again by a factor of 1/x before the feature image is constructed. As seen, the proposed method demonstrated outstanding resistance to scaling
Table 7
Results of resisting median and Wiener filtering attacks.
Fusion of           Median filtering                          Average   Wiener filtering                          Average
different images    3×3    5×5    7×7    9×9    11×11                   3×3    5×5    7×7    9×9    11×11
P1                  0.990  0.973  0.968  0.966  0.965   0.972   0.998  0.998  0.998  0.998  0.998   0.998
P2                  0.999  0.999  0.999  0.999  0.998   0.999   1.000  1.000  1.000  1.000  1.000   1.000
P3                  1.000  0.997  0.995  0.994  0.992   0.995   1.000  0.999  0.998  0.997  0.997   0.998
P4                  1.000  0.999  0.997  0.996  0.995   0.997   1.000  1.000  1.000  1.000  0.999   1.000
P1, P2              0.998  0.996  0.996  0.995  0.993   0.996   0.999  0.998  0.997  0.997  0.995   0.997
P1, P3              1.000  0.997  0.994  0.992  0.988   0.994   1.000  0.999  0.998  0.994  0.992   0.996
P1, P4              0.999  0.996  0.994  0.990  0.986   0.993   1.000  0.998  0.995  0.991  0.989   0.994
P2, P3              0.999  0.999  0.998  0.995  0.993   0.997   1.000  0.999  0.998  0.997  0.995   0.998
P2, P4              0.998  0.997  0.994  0.994  0.993   0.995   0.999  0.998  0.994  0.993  0.992   0.995
P3, P4              0.999  0.997  0.994  0.991  0.986   0.993   0.998  0.998  0.997  0.996  0.995   0.997
P1, P2, P3          0.999  0.997  0.995  0.994  0.990   0.995   0.999  0.998  0.995  0.995  0.995   0.996
P1, P2, P4          0.998  0.994  0.990  0.989  0.987   0.992   0.998  0.995  0.992  0.989  0.987   0.992
P2, P3, P4          0.999  0.996  0.993  0.990  0.985   0.993   0.999  0.997  0.993  0.991  0.989   0.994
P1, P3, P4          0.998  0.998  0.996  0.992  0.989   0.995   0.999  0.998  0.998  0.996  0.995   0.997
P1, P2, P3, P4      1.000  0.999  0.998  0.998  0.997   0.998   1.000  1.000  0.999  0.998  0.997   0.999
Table 8
Results of resisting Gaussian low-pass and mean filtering attacks.
Fusion of           Gaussian low-pass filtering               Average   Mean filtering                            Average
different images    3×3    5×5    7×7    9×9    11×11                   3×3    5×5    7×7    9×9    11×11
P1                  1.000  1.000  1.000  1.000  1.000   1.000   1.000  0.999  0.997  0.996  0.996   0.997
P2                  1.000  1.000  1.000  1.000  1.000   1.000   0.998  0.998  0.998  0.998  0.997   0.998
P3                  1.000  1.000  1.000  1.000  1.000   1.000   1.000  0.998  0.996  0.996  0.995   0.997
P4                  1.000  1.000  1.000  1.000  1.000   1.000   1.000  1.000  0.998  0.998  0.997   0.998
P1, P2              1.000  1.000  1.000  1.000  1.000   1.000   0.996  0.995  0.994  0.992  0.990   0.993
P1, P3              1.000  1.000  1.000  1.000  1.000   1.000   0.997  0.996  0.994  0.990  0.990   0.993
P1, P4              1.000  1.000  1.000  1.000  1.000   1.000   0.995  0.994  0.991  0.990  0.984   0.991
P2, P3              1.000  1.000  1.000  1.000  1.000   1.000   0.999  0.996  0.995  0.993  0.991   0.995
P2, P4              0.999  0.999  0.999  0.999  0.999   0.999   0.995  0.993  0.991  0.991  0.989   0.992
P3, P4              1.000  1.000  1.000  1.000  1.000   1.000   0.998  0.995  0.993  0.992  0.988   0.993
P1, P2, P3          1.000  1.000  1.000  1.000  1.000   1.000   0.997  0.994  0.993  0.991  0.988   0.992
P1, P2, P4          0.999  0.999  0.999  0.999  0.999   0.999   0.996  0.993  0.988  0.986  0.986   0.990
P2, P3, P4          1.000  1.000  1.000  1.000  1.000   1.000   0.997  0.994  0.989  0.986  0.985   0.990
P1, P3, P4          1.000  1.000  1.000  1.000  1.000   1.000   0.998  0.996  0.995  0.991  0.989   0.994
P1, P2, P3, P4      1.000  1.000  1.000  1.000  1.000   1.000   0.998  0.997  0.995  0.994  0.992   0.995
Table 9
Results of resisting offset and cropping attacks.
Fusion of           Offset direction                                          Average   Cropping position                                   Average
different images    Shift right   Shift left   Shift up   Shift down                    Upper left   Upper left   Upper left   Center
                    2 columns     2 columns    2 rows     2 rows                        1/16         1/8          1/4          1/4
P1                  0.995  0.993  0.992  0.994   0.993   0.976  0.963  0.923  0.933   0.949
P2                  0.997  0.995  0.995  0.997   0.996   0.992  0.987  0.962  0.979   0.980
P3                  0.996  0.992  0.993  0.993   0.993   0.983  0.973  0.922  0.955   0.958
P4                  0.993  0.995  0.998  0.997   0.996   0.979  0.963  0.932  0.952   0.957
P1, P2              0.991  0.992  0.985  0.986   0.989   0.993  0.981  0.958  0.982   0.978
P1, P3              0.985  0.984  0.988  0.984   0.985   0.980  0.970  0.941  0.968   0.965
P1, P4              0.981  0.981  0.984  0.979   0.981   0.987  0.967  0.928  0.948   0.958
P2, P3              0.989  0.993  0.992  0.986   0.990   0.989  0.979  0.960  0.971   0.975
P2, P4              0.988  0.988  0.988  0.987   0.988   0.992  0.981  0.959  0.956   0.972
P3, P4              0.986  0.989  0.988  0.982   0.986   0.987  0.962  0.925  0.930   0.951
P1, P2, P3          0.989  0.989  0.988  0.985   0.988   0.991  0.981  0.952  0.975   0.975
P1, P2, P4          0.986  0.987  0.985  0.982   0.985   0.991  0.974  0.945  0.964   0.969
P2, P3, P4          0.987  0.987  0.987  0.981   0.986   0.989  0.974  0.948  0.951   0.966
P1, P3, P4          0.986  0.982  0.986  0.979   0.983   0.979  0.960  0.924  0.953   0.954
P1, P2, P3, P4      0.992  0.995  0.992  0.989   0.992   0.993  0.983  0.958  0.963   0.974
attacks, as evidenced by its average NC value of 0.998. The main reasons are as follows: the CT transform can effectively capture the local features of the cover image, and the method's resistance to scaling attacks is improved by the moment-based image normalization process. These properties enable the proposed method to maintain stable extracted feature vectors when an image undergoes a scaling attack. Consequently, the proposed method has an enhanced ability to resist scaling attacks.

5.5.3. Rotation attack
The results of rotation attacks are summarized in Table 11. It can be seen that the proposed method achieves strong resistance to rotation attacks, with an average NC value of 0.964. This is mainly attributed to the fact that the CHFMs computed on the LSs possess rotational invariance. This property ensures that even if the LSs are rotated, the feature data can still be effectively extracted from the low-frequency part. Additionally, the FFT transforms the amplitude sequence, further enhancing the
Table 10
Results of resisting scaling attack.
Fusion of           Scaling                                                                          Average
different images    Shrink   Shrink   Shrink   Shrink   Magnify   Magnify   Magnify   Magnify
                    0.25x    0.4x     0.5x     0.7x     4x        2x        2.4x      1.3x
P1                  0.999  0.999  1.000  0.998  1.000  1.000  1.000  0.999   0.999
P2                  1.000  1.000  1.000  0.999  1.000  1.000  1.000  1.000   1.000
P3                  0.998  0.998  1.000  0.997  1.000  1.000  1.000  0.999   0.999
P4                  0.999  1.000  1.000  0.999  1.000  1.000  1.000  1.000   1.000
P1, P2              0.998  0.993  1.000  0.994  1.000  1.000  0.998  0.999   0.998
P1, P3              0.997  0.994  1.000  0.995  1.000  1.000  0.998  0.999   0.998
P1, P4              0.996  0.988  0.999  0.992  1.000  1.000  0.999  0.999   0.997
P2, P3              0.999  0.996  0.999  0.996  1.000  1.000  1.000  1.000   0.998
P2, P4              0.997  0.992  0.999  0.994  1.000  1.000  0.999  0.999   0.997
P3, P4              0.998  0.993  1.000  0.994  1.000  1.000  1.000  1.000   0.998
P1, P2, P3          0.998  0.992  1.000  0.995  1.000  1.000  0.999  0.999   0.998
P1, P2, P4          0.994  0.989  0.999  0.992  1.000  1.000  0.997  0.999   0.996
P2, P3, P4          0.996  0.989  1.000  0.990  1.000  1.000  0.998  0.999   0.996
P1, P3, P4          0.996  0.991  0.999  0.992  1.000  1.000  0.998  0.999   0.997
P1, P2, P3, P4      1.000  0.999  1.000  0.999  1.000  1.000  1.000  1.000   1.000
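The rescale-back step noted in Section 5.5.2 — after a scaling attack by a factor of x, scale by 1/x before recomputing the feature image — can be sketched as follows. Nearest-neighbor interpolation is an assumption here; any resizing method could be substituted.

```python
import numpy as np

def nn_resize(img, factor):
    """Nearest-neighbor resize of a 2-D array by a scale factor."""
    h = max(1, int(round(img.shape[0] * factor)))
    w = max(1, int(round(img.shape[1] * factor)))
    rows = (np.arange(h) / factor).astype(int).clip(0, img.shape[0] - 1)
    cols = (np.arange(w) / factor).astype(int).clip(0, img.shape[1] - 1)
    return img[np.ix_(rows, cols)]

def undo_scaling(attacked, factor):
    """Scale by 1/x so the feature image is recomputed at the original size."""
    return nn_resize(attacked, 1.0 / factor)
```

Because the feature image is always recomputed at the original resolution, the detector never has to guess the attack's scale direction, only its factor.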
Table 11
Results of resisting rotation attack.
Fusion of           Rotation angle                                                         Average
different images    10°    20°    30°    40°    50°    60°    70°    80°    90°
P1                  0.973  0.958  0.956  0.955  0.956  0.960  0.965  0.980  0.996   0.967
P2                  0.984  0.970  0.958  0.940  0.941  0.958  0.972  0.984  0.999   0.967
P3                  0.986  0.981  0.982  0.979  0.981  0.982  0.987  0.988  0.995   0.985
P4                  0.975  0.951  0.937  0.924  0.924  0.941  0.961  0.979  0.997   0.954
P1, P2              0.998  1.000  1.000  1.000  0.993  0.994  0.998  0.999  0.993   0.997
P1, P3              0.981  0.969  0.967  0.962  0.958  0.963  0.961  0.971  0.992   0.969
P1, P4              0.973  0.965  0.962  0.950  0.938  0.947  0.954  0.973  0.994   0.962
P2, P3              0.980  0.959  0.950  0.934  0.935  0.949  0.958  0.975  0.998   0.960
P2, P4              0.969  0.948  0.925  0.912  0.899  0.925  0.954  0.969  0.995   0.944
P3, P4              0.978  0.957  0.937  0.922  0.916  0.939  0.952  0.974  0.996   0.952
P1, P2, P3          0.978  0.962  0.957  0.947  0.942  0.944  0.956  0.977  0.994   0.962
P1, P2, P4          0.969  0.968  0.954  0.944  0.933  0.937  0.949  0.971  0.991   0.957
P2, P3, P4          0.972  0.948  0.933  0.920  0.918  0.936  0.949  0.973  0.994   0.949
P1, P3, P4          0.978  0.964  0.961  0.955  0.950  0.951  0.958  0.979  0.988   0.965
P1, P2, P3, P4      0.978  0.969  0.956  0.944  0.938  0.950  0.959  0.979  0.999   0.963
rotational invariance in the frequency domain and thereby increasing the robustness of the method against rotation attacks. Consequently, the proposed method improves its resistance to rotation attacks.

5.6. Combined attack

To further measure the anti-attack capability of the method, the cover image was subjected to combined attacks, and the corresponding results are listed in Table 12. As shown in the table, the average NC value of the proposed method against the combined attacks is still as high as 0.980. According to these results, the proposed method is capable of resisting a range of combined attacks, in addition to conventional and geometric attacks, indicating that it can withstand various types of attacks and exhibits strong, robust performance.

5.7. Impact of multiple-image fusion on the performance of the proposed method

To objectively evaluate the impact of multiple-image fusion on the performance of the proposed method, nine sets of images were first randomly selected from the image dataset BossBase [44], containing 5, 10, 15, 20, 30, 40, 60, 80, and 100 images, respectively. Then, for each set of images, a fused image was generated using the fusion technique described in Section 2.4. The proposed method was then used to construct a zero-watermarking image for the fused image and to perform experiments on various attacks. The experimental results are shown in Tables 13 and 14, from which it can be seen that the proposed method exhibits stable robustness under different numbers of fused images, with the average NC value always staying above 0.970 and no apparent performance difference between the various groups. These results demonstrate that even when the number of fused images increases dramatically, the proposed method can still effectively resist multiple types of attacks and can accommodate the fusion of different numbers of images.

5.8. Experiments on image datasets

To verify the generalizability of the proposed method, we conducted experiments on four benchmark image datasets: BossBase [44], BOWS-2 [45], COVID [46], and SIPI [47]. For the experiments, 100 images were randomly selected from each image dataset for evaluation. The proposed method is first used to construct a zero-watermarking image for each test image. Then, an anti-attack test is performed to quantify the performance by calculating the NC value between the extracted image and the original copyrighted image. The average test results and the standard deviation (STD) over the 100 images are shown in Table 15. It can be seen that the average NC values of the proposed method are always higher than 0.95 and the STDs remain small (at most 0.039) on all datasets, indicating that the proposed method not only exhibits excellent robustness on different datasets but also has excellent generalization ability. Although the experiments are conducted on standard datasets, the possible attacks on real-world natural and medical images are simulated, which validates our method's attack resistance and generalization ability. COVID [46] is a publicly open dataset of chest X-ray and CT images of patients, containing 930 images. The proposed method demonstrates superior attack resistance on this dataset, indicating its potential application in real-world image copyright protection scenarios.
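The per-dataset protocol of Section 5.8 — construct a zero-watermark per image, attack the image, recompute the feature, extract, then aggregate NC mean and STD — can be sketched end-to-end. This is a toy stand-in, not the paper's pipeline: mean-thresholded FFT magnitudes replace the full CT + CHFM feature, and additive Gaussian noise stands in for the attack set.

```python
import numpy as np

def toy_feature(img, m=8, n=8):
    """Stand-in feature: binarize the first m*n FFT magnitudes at their mean."""
    mags = np.abs(np.fft.fft2(img)).ravel()[: m * n]
    return (mags >= mags.mean()).astype(np.uint8).reshape(m, n)

def nc(a, b):
    """Normalized correlation (Eq. (28)) with a small guard on the denominator."""
    a, b = a.astype(float), b.astype(float)
    return np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def evaluate(images, copyright_img, noise_sigma=0.05, seed=0):
    """Return (mean NC, STD) over a dataset, as reported per column in Table 15."""
    rng = np.random.default_rng(seed)
    scores = []
    for img in images:
        z = np.bitwise_xor(toy_feature(img), copyright_img)      # construction
        attacked = img + rng.normal(0, noise_sigma, img.shape)   # attack
        extracted = np.bitwise_xor(z, toy_feature(attacked))     # detection
        scores.append(nc(copyright_img, extracted))
    return float(np.mean(scores)), float(np.std(scores))
```

With no attack (noise_sigma = 0), the recomputed feature matches exactly and every NC is 1; robustness is then the degree to which the NC mean stays high, and the STD low, as the attack strength grows.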
Table 12
Results of resisting combination attacks.
Fusion of           Type of combination attack                                                                                                                                                          Average
different images    Rotation (10°)      JPEG (QF = 5)       JPEG (QF = 10)         Gaussian noise (0.2)   Gaussian noise (0.2)    Median filtering (3×3)        Median filtering (3×3)
                    + Scaling (0.25x)   + Scaling (0.25x)   + Wiener filt. (3×3)   + Rotation (10°)       + Wiener filt. (3×3)    + Salt & pepper noise (0.2)   + Gaussian noise (0.2)
P1                  0.979  0.987  0.989  0.903  0.902  0.920  0.903   0.943
P2                  0.967  0.997  0.998  0.986  0.985  0.990  0.985   0.991
P3                  0.957  0.995  0.995  0.954  0.955  0.958  0.953   0.972
P4                  0.979  0.989  0.995  0.953  0.953  0.967  0.953   0.973
P1, P2              0.973  0.992  0.996  0.970  0.970  0.978  0.970   0.981
P1, P3              0.967  0.991  0.992  0.958  0.956  0.968  0.957   0.974
P1, P4              0.956  0.987  0.994  0.960  0.964  0.973  0.962   0.977
P2, P3              0.978  0.996  0.998  0.984  0.985  0.987  0.986   0.990
P2, P4              0.975  0.993  0.999  0.982  0.980  0.983  0.978   0.987
P3, P4              0.967  0.988  0.998  0.969  0.970  0.973  0.971   0.981
P1, P2, P3          0.977  0.992  0.997  0.968  0.968  0.977  0.970   0.981
P1, P2, P4          0.963  0.994  0.994  0.970  0.969  0.975  0.970   0.981
P2, P3, P4          0.964  0.994  0.995  0.978  0.976  0.980  0.975   0.985
P1, P3, P4          0.976  0.985  0.991  0.959  0.960  0.963  0.960   0.974
P1, P2, P3, P4      0.978  0.995  0.999  0.982  0.983  0.983  0.983   0.989
Table 13
Experimental results of multi-image fusion against common attacks.
Number of       JPEG compression   Median filtering   Wiener filtering   Gaussian low-pass   Mean filtering   Gaussian
fusion images   (QF = 15)          (3×3)              (3×3)              filtering (3×3)     (3×3)            noise (0.1)
5 images        0.998              1.000              1.000              1.000               0.994            0.994
10 images       0.998              1.000              1.000              1.000               0.998            0.989
15 images       1.000              1.000              1.000              1.000               0.999            0.991
20 images       0.997              0.999              0.999              1.000               0.996            0.989
30 images       0.996              1.000              1.000              1.000               0.996            0.990
40 images       0.999              1.000              1.000              1.000               0.996            0.990
60 images       0.997              0.999              0.999              1.000               0.996            0.980
80 images       0.998              1.000              1.000              1.000               0.990            0.979
100 images      0.999              1.000              1.000              1.000               0.998            0.992
Table 14
Experimental results of multi-image fusion against geometric attack.
Number of       Salt & pepper   Speckle       Gaussian white       Rotation       Scaling attack   Cropping attack
fusion images   noise (0.1)     noise (0.1)   noise (0.1,0.05,0)   attack (10°)   (Shrink 0.25x)   (Upper left 1/16)
5 images        0.994           0.997         0.994                0.970          0.998            0.992
10 images       0.992           0.994         0.994                0.974          0.999            0.996
15 images       0.993           0.994         0.993                0.983          1.000            0.998
20 images       0.982           0.994         0.988                0.974          0.997            0.994
30 images       0.992           0.991         1.000                0.992          1.000            0.999
40 images       0.978           0.996         0.994                0.976          0.998            0.994
60 images       0.990           0.994         0.991                0.971          0.997            0.990
80 images       0.980           0.993         0.991                0.974          0.998            0.996
100 images      0.979           0.995         0.993                0.990          1.000            0.996
5.9. Comparison with similar methods

To highlight the superiority of the proposed method, six representative similar methods were selected for comparison experiments under the same conditions; the results are shown in Table 16, where the proposed method is generally superior to the six similar methods in terms of robustness. The reasons can mainly be attributed to the following four aspects. First, the methods in [16–19] all use block processing, and the "block effect" introduced by these methods can lead to discontinuities or blurring of the boundaries between neighboring image blocks, which reduces the stability and accuracy of the feature vectors. In contrast, the proposed method generates the amplitude sequences by calculating the CHFMs of the effective regions of the LSs and then applies the FFT; this not only avoids the "block effect" inherent in those methods but also leverages the rotational and scaling invariance of the CHFMs, while the FFT further enhances the method's resistance to geometric attacks. Second, the methods in [22,25] both construct a zero-watermarking image based on image moments. The method in [22] employs FQGPCET, a nonlinear transformation method based on quaternions and polar coordinates, which is more sensitive to noise and shifts due to its nonlinear nature, potentially leading to distortion of the extracted features. The method in [25] is less robust to cropping and offset attacks due to the sensitivity of the polar harmonic invariant moments to cropping and offset; specifically, the NC value obtained by that method is only 0.872 for the center 1/16 cropping attack, since the cropped part is not used in the computation. The proposed method, by contrast, constructs binary feature vectors using CHFMs and the FFT based on frequency-domain feature extraction, which makes it robust to this type of attack. Third, compared with the NSST used by the method in [16], the CT transform has sparsity properties and better detail-characterization capabilities; it can therefore concentrate the cover-image information into fewer coefficients, allowing for effective noise suppression and thereby improving the ability to resist noise attacks. Fourth, unlike the DTCWT used in [17], the proposed method constructs features by introducing the CT transform, which enables the extraction of more stable principal-component information. When subjected to noise, filtering, and JPEG compression, the CT transform can effectively remove high-frequency
Table 15
Comparative experimental results for four different datasets.

Type of attack                          BossBase            BOWS-2              COVID               SIPI
                                        Average NC  STD     Average NC  STD     Average NC  STD     Average NC  STD
JPEG compression (QF = 5)               0.9899      0.0185  0.9962      0.0038  0.9971      0.0036  0.9963      0.0036
JPEG compression (QF = 15)              0.9973      0.0052  0.9989      0.0012  0.9991      0.0014  0.9990      0.0013
Median filtering (3 × 3)                0.9985      0.0020  0.9992      0.0008  0.9996      0.0008  0.9991      0.0005
Median filtering (11 × 11)              0.9933      0.0069  0.9961      0.0026  0.9985      0.0020  0.9965      0.0030
Wiener filtering (3 × 3)                0.9997      0.0005  0.9998      0.0003  0.9999      0.0002  0.9999      0.0004
Wiener filtering (11 × 11)              0.9979      0.0013  0.9987      0.0009  0.9995      0.0009  0.9994      0.0006
Gaussian low-pass filtering (3 × 3)     0.9999      0.0003  0.9999      0.0002  0.9999      0.0001  0.9999      0.0002
Gaussian low-pass filtering (11 × 11)   0.9998      0.0004  0.9996      0.0003  0.9995      0.0006  0.9996      0.0003
Mean filtering (3 × 3)                  0.9985      0.0009  0.9987      0.0008  0.9993      0.0010  0.9982      0.0010
Mean filtering (11 × 11)                0.9946      0.0026  0.9956      0.0021  0.9977      0.0028  0.9961      0.0026
Gaussian noise (0.1)                    0.9665      0.0309  0.9846      0.0114  0.9881      0.0116  0.9812      0.0212
Gaussian noise (0.5)                    0.9528      0.0244  0.9510      0.0302  0.9609      0.0318  0.9542      0.0245
Salt & pepper noise (0.1)               0.9786      0.0240  0.9912      0.0065  0.9932      0.0070  0.9802      0.0390
Salt & pepper noise (0.5)               0.9537      0.0230  0.9617      0.0267  0.9710      0.0269  0.9622      0.0253
Speckle noise (0.1)                     0.9948      0.0041  0.9946      0.0035  0.9962      0.0036  0.9900      0.0017
Speckle noise (0.5)                     0.9868      0.0084  0.9845      0.0081  0.9889      0.0088  0.9859      0.0106
Gaussian white noise (0.1,0.05,0)       0.9750      0.0329  0.9936      0.0088  0.9927      0.0095  0.9815      0.0291
Gaussian white noise (0.5,0.25,0)       0.9558      0.0079  0.9714      0.0242  0.9765      0.0252  0.9744      0.0214
Rotation attack (10°)                   0.9695      0.0086  0.9703      0.0071  0.9824      0.0120  0.9844      0.0140
Rotation attack (80°)                   0.9694      0.0083  0.9716      0.0076  0.9644      0.0304  0.9714      0.0242
Scaling attack (Shrink 0.25x)           0.9990      0.0012  0.9994      0.0010  0.9991      0.0014  0.9991      0.0010
Scaling attack (Magnify 4x)             0.9991      0.0009  0.9990      0.0012  0.9999      0.0002  0.9998      0.0003
Cropping attack (Upper left 1/16)       0.9963      0.0053  0.9988      0.0012  0.9971      0.0032  0.9969      0.0028
Cropping attack (Upper left 1/8)        0.9766      0.0072  0.9859      0.0106  0.9872      0.0186  0.9845      0.0081
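The NC (normalized correlation) values reported in Table 15 and the following tables measure the similarity between the original and extracted copyright images. A minimal sketch of the standard NC definition, assuming the watermarks are flattened to equal-length numeric sequences (the paper's exact implementation may differ):

```python
def nc(w1, w2):
    # Normalized correlation between two equal-length watermark sequences:
    # NC = sum(w1*w2) / (||w1|| * ||w2||); a value of 1.0 means a perfect match.
    num = sum(a * b for a, b in zip(w1, w2))
    den = (sum(a * a for a in w1) ** 0.5) * (sum(b * b for b in w2) ** 0.5)
    return num / den
```

An undistorted extraction gives nc(w, w) = 1.0, while attacks that flip watermark bits pull the value below 1.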
Table 16
Experimental results of the proposed method and six similar methods.

Type of attack                          Method [16]  Method [17]  Method [18]  Method [19]  Method [22]  Method [25]  Proposed method
JPEG compression (QF = 15)              0.983        0.985        0.995        0.989        0.997        0.996        0.998
Median filtering (3 × 3)                0.996        0.989        0.998        0.980        0.972        1.000        0.999
Wiener filtering (3 × 3)                0.998        0.995        1.000        0.999        0.996        1.000        1.000
Gaussian low-pass filtering (3 × 3)     0.999        0.999        1.000        0.996        0.998        1.000        1.000
Mean filtering (3 × 3)                  0.997        0.995        0.995        0.989        0.992        0.979        0.998
Gaussian noise (0.1)                    0.987        0.979        0.985        0.981        0.966        0.944        0.991
Salt & pepper noise (0.1)               0.962        0.939        0.964        0.997        0.959        0.954        0.993
Speckle noise (0.1)                     0.969        0.966        0.974        0.987        0.971        0.980        0.997
Gaussian white noise (0.1, 0.05, 0)     0.988        0.925        0.984        0.960        0.958        0.940        0.993
Rotation attack (10°)                   0.899        0.939        0.890        0.896        0.985        0.985        0.984
Scaling attack (Shrink 0.25x)           0.997        0.995        1.000        0.981        0.992        0.997        1.000
Cropping attack (Upper left 1/16)       0.998        0.996        0.976        0.998        1.000        1.000        0.992
Offset attack (Shift up 2 rows)         0.980        0.969        0.977        0.981        0.973        0.950        0.995
Table 17
Summary of improvement rates from Table 16.

Type of attack                          Method [16]  Method [17]  Method [18]  Method [19]  Method [22]  Method [25]  Average
JPEG compression (QF = 15)              0.910%       1.114%       0.910%       1.012%       0.706%       0.706%       0.893%
Median filtering (3 × 3)                0.909%       1.835%       0.706%       2.567%       2.884%       −0.100%      1.467%
Wiener filtering (3 × 3)                0.908%       0.806%       0.000%       0.806%       0.806%       0.000%       0.555%
Gaussian low-pass filtering (3 × 3)     0.100%       0.100%       0.000%       0.402%       0.200%       0.000%       0.134%
Mean filtering (3 × 3)                  0.504%       0.302%       0.302%       0.910%       0.605%       2.675%       0.883%
Gaussian noise (0.1)                    0.405%       1.226%       0.814%       0.916%       2.588%       4.757%       1.784%
Salt & pepper noise (0.1)               3.115%       5.751%       3.008%       3.762%       3.545%       4.088%       3.878%
Speckle noise (0.1)                     2.890%       3.209%       3.746%       4.180%       2.678%       1.735%       3.073%
Gaussian white noise (0.1,0.05,0)       1.120%       7.701%       1.223%       3.438%       4.088%       5.638%       3.868%
Rotation attack (10°)                   9.821%       9.333%       10.562%      10.438%      0.306%       0.204%       6.777%
Scaling attack (Shrink 0.25x)           0.705%       0.908%       0.000%       1.833%       0.908%       0.705%       0.843%
Cropping attack (Upper left 1/16)       −0.601%      0.405%       0.303%       0.609%       −0.800%      −0.800%      −0.147%
Offset attack (Shift up 2 rows)         3.323%       2.683%       1.842%       2.577%       3.323%       4.737%       3.081%
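Table 17 summarizes improvement rates of the proposed method's NC over each baseline. One common way to compute such a rate is the relative-gain formula below; this is an assumption for illustration, since the section does not state the exact formula, and the NC values used here are illustrative rather than the paper's:

```python
def improvement_rate(nc_proposed, nc_baseline):
    # Relative improvement (in percent) of the proposed method's NC over a
    # baseline method's NC. Assumed formula, not quoted from the paper.
    return (nc_proposed - nc_baseline) / nc_baseline * 100.0

# Illustrative values only:
rate = improvement_rate(0.998, 0.987)  # about 1.11%
```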
signals while retaining the low-frequency signals that represent the cover image, resulting in a more stable extracted feature vector. In summary, our method is robust against most attacks compared to similar methods.

Based on the data in Table 16, the improvement rates of the proposed method over the other methods are given in Table 17. It can be seen that, for most attacks, the proposed method outperforms the other techniques, with an average improvement rate of approximately 2%, indicating that the proposed method is effective.

5.10. Ablation experiment

In this study, a zero-watermarking method that combines CT, CHFMs, and FFT is proposed. The experimental results show that it provides excellent performance. In general, CT is a multi-scale transform that can resist noise and filtering attacks. However, it is difficult to adjust adaptively due to the fixed orientation of its basis functions, resulting in limited adaptive capability against geometric attacks such as rotation. CHFMs utilize the rotational and scaling invariance of
Table 18
Results of ablation experiments.

Type of attack                          Our method  Without CT  Without CHFMs  Without FFT
JPEG compression (QF = 15)              0.998       0.973       0.988          0.981
Median filtering (3 × 3)                0.999       0.964       0.998          0.991
Wiener filtering (3 × 3)                1.000       0.968       0.999          0.997
Gaussian low-pass filtering (3 × 3)     1.000       0.989       0.999          0.990
Mean filtering (3 × 3)                  0.998       0.950       0.998          0.989
Gaussian noise (0.1)                    0.991       0.884       0.968          0.988
Salt & pepper noise (0.1)               0.993       0.798       0.975          0.977
Speckle noise (0.1)                     0.997       0.827       0.986          0.967
Gaussian white noise (0.1,0.05,0)       0.993       0.827       0.977          0.974
Rotation attack (10°)                   0.984       0.982       0.893          0.964
Scaling attack (Shrink 0.25x)           1.000       0.997       0.902          0.970
Cropping attack (Upper left 1/16)       0.992       0.981       0.881          0.906
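The stability argument above — low-frequency content survives noise far better than individual pixels do — can be illustrated with a toy experiment. Block means serve here as a crude stand-in for a low-frequency sub-band; the actual method uses the Contourlet transform, so this is only a sketch of the principle:

```python
import random

random.seed(0)
img = [random.randint(0, 255) for _ in range(4096)]   # toy 64x64 image, flattened
noisy = [p + random.gauss(0, 10) for p in img]        # additive Gaussian noise

block = 64
means_img = [sum(img[i:i + block]) / block for i in range(0, len(img), block)]
means_noisy = [sum(noisy[i:i + block]) / block for i in range(0, len(noisy), block)]

# Average absolute change per raw pixel vs. per block mean.
pixel_err = sum(abs(a - b) for a, b in zip(img, noisy)) / len(img)
mean_err = sum(abs(a - b) for a, b in zip(means_img, means_noisy)) / len(means_img)
# mean_err is roughly 8x smaller: averaging 64 samples shrinks the noise std by sqrt(64).
```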
moments to resist geometric distortion attacks; however, their global integration property makes them highly sensitive to local distortions, such as compression and noise. FFT-based global spectral analysis enhances robustness to geometric attacks and resists interference in the frequency domain, but it is weak against localized cropping attacks.

To verify how CT, CHFMs, and FFT enhance robustness in our method, we performed ablation experiments. The experimental results are shown in Table 18. It can be seen that the three transforms are complementary in their ability to resist attacks. CT provides resistance to noise and filtering attacks through multi-scale frequency-domain features; CHFMs provide resistance to geometric attacks, such as rotation and scaling, through geometrically invariant features; and FFT enhances resistance to conventional and geometric attacks through frequency-domain stability. Each transform provides resilience against different types of attacks, and their synergy enhances the overall robustness of our method.

5.11. Complexity comparison

Table 19 summarizes the average running time of the seven methods for processing 100 images under the same experimental conditions. Methods [16,17,25] have the shortest running times because they are zero-watermarking methods for a single image, whereas methods [18,19] take longer because they must perform operations such as fusion and normalization on multiple images. The method in [22] has a relatively long running time because it must compute image moments, even though it processes only a single image. The proposed method has the longest running time among the seven because it combines operations such as multi-image fusion, CT, CHFMs, and FFT. In practice, a runtime within 60 s on an ordinary personal computer is feasible; at approximately 31.5 s, the running time of the proposed method is therefore at an acceptable level.

In the experiments, the sizes of the original cover image and the copyrighted image are assumed to be N × N and n × n, respectively. The proposed method mainly consists of the following steps: image fusion, CT, CHFMs, FFT, copyrighted image encryption, and zero-watermarking generation. The computational complexities of these steps are O(4N²), O((1/4)N²), O((1/8)N³), O((1/64)N² log N), O(2n²), and O(n²), respectively. If some implementation details are ignored, the overall computational complexity of the proposed method can be approximated as O((1/8)N³ + (1/64)N² log N + (17/4)N² + 3n²). Accordingly, the computational complexities of the methods in [16–19,22], and [25] are O((192/64)N²) + O(2n²), O((205/64)N²) + O(2n²), O((624/64)N²) + O(2n²), O((624/64)N²) + O(2n²), O((1/32)N² + 2N² log N) + O(2n²), and O((192/64)N²) + O(2n²), respectively. Similarly, the space complexities of the six steps of the proposed method are O(4N²), O(2N²), O((45/64)N²), O((1/16)N²), O(3n²), and O(4n²), respectively, so the overall space complexity of the proposed method can be approximately expressed as O((433/64)N² + 7n²). The space complexities of the methods in [16–19,22], and [25] are O((73/16)N²) + O(5n²), O((513/64)N²) + O(5n²), O((593/64)N²) + O(5n²), O((657/64)N²) + O(6n²), O((193/64)N²) + O(5n²), and O((69/64)N²) + O(5n²), respectively.

In summary, the computational complexity of the proposed method is approximately O(N³), and its space complexity is O(N²). The increase in computational complexity compared with similar methods is primarily due to the introduction of image moments, which enhance resistance against geometric attacks. In terms of space complexity, the proposed method is comparable to its counterparts, indicating that the fusion technique effectively mitigates the growth in storage overhead as the number of images increases.

5.12. Key space and sensitivity analysis

A simple image encryption method based on the Lorenz chaotic system and the Fibonacci Q-matrix is proposed to improve the security of the original binary copyrighted images. Next, the security of the proposed image encryption scheme is analyzed in terms of key space and sensitivity.

5.12.1. Key space

In general, the security of an encryption scheme depends critically on the size of its key space; a sufficiently large key space is essential for resistance against exhaustive attacks. The security of the proposed encryption scheme primarily relies on the initial conditions of the Lorenz chaotic system, as described by Eq. (1). In a 64-bit operating system environment, each parameter is represented as a 64-bit double-precision floating-point number. Consequently, the total key space amounts to (2⁶⁴)³ = 2¹⁹². A key space of this magnitude is considered adequate to ensure the cryptographic strength of the scheme against exhaustive attacks, thereby enhancing its robustness in practical applications.

5.12.2. Sensitivity analysis

Key sensitivity is regarded as one of the fundamental metrics for evaluating the security of cryptographic schemes. A cryptosystem with high security strength should exhibit significant sensitivity to even minor perturbations of the key; that is, a slight modification of the key should prevent the decryption algorithm from successfully recovering the original plaintext image. The experimental results depicted in Fig. 12 demonstrate that when the decryption key matches the encryption key precisely, the decrypted image is perfectly consistent with the original. However, when a subtle perturbation is introduced into the decryption key parameter x₁, i.e., x₁′ = x₁ + 10⁻¹⁶, the resulting decrypted image becomes severely distorted and entirely unrecognizable to the human eye. This result indicates that the proposed image encryption scheme possesses a high level of key sensitivity, thereby enhancing its resistance against key-related attacks.
Table 19
Comparison of the running times of seven similar methods.

                   Method [16]  Method [17]  Method [18]  Method [19]  Method [22]  Method [25]  Proposed method
Running time (s)   0.846        1.066        4.326        4.612        5.004        0.907        31.522

Fig. 12. Experimental results of key sensitivity analysis.
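The Fibonacci Q-matrix used alongside the Lorenz system in the encryption step (Section 5.12 and [38]) owes its usefulness to a simple property: powers of Q = [[1, 1], [1, 0]] contain Fibonacci numbers, and det(Qⁿ) = (−1)ⁿ, so Qⁿ is exactly invertible and a scrambled pixel block can be recovered losslessly. A small sketch of that property (illustrative only, not the paper's full encryption routine):

```python
def mat_mul(a, b):
    # 2x2 integer matrix product.
    return [[a[0][0] * b[0][0] + a[0][1] * b[1][0], a[0][0] * b[0][1] + a[0][1] * b[1][1]],
            [a[1][0] * b[0][0] + a[1][1] * b[1][0], a[1][0] * b[0][1] + a[1][1] * b[1][1]]]

def q_power(n):
    # Q^n for the Fibonacci Q-matrix Q = [[1, 1], [1, 0]].
    q = [[1, 1], [1, 0]]
    m = [[1, 0], [0, 1]]
    for _ in range(n):
        m = mat_mul(m, q)
    return m

def fib(n):
    # F(0) = 0, F(1) = 1, ...
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

For example, q_power(10) equals [[F(11), F(10)], [F(10), F(9)]] = [[89, 55], [55, 34]], whose determinant is 1.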
5.13. Discussions

A robust zero-watermarking method is proposed by considering the advantages of CT, CHFMs, and FFT. Experimental results show the superior attack resistance of the proposed method against conventional image processing, geometric attacks, and combined attacks. The ablation results show that without CT, the ability to resist noise attacks is weaker; without CHFMs, the ability to resist geometric attacks decreases significantly; and without FFT, the ability to resist noise and cropping attacks decreases slightly. In addition, compared with the methods in [16–19,22,25], our proposed method achieves superior robustness against most attacks. Although these results demonstrate the effectiveness of the proposed method, its limitations remain in the following three aspects.

5.13.1. Ability to resist Gaussian noise

From the experimental results in Tables 5 and 6, it can be concluded that the proposed zero-watermarking scheme exhibits strong robustness against speckle noise and salt & pepper noise. However, its performance under Gaussian noise is not satisfactory, indicating limited resistance to such interference. Consequently, the method's capability to withstand Gaussian noise attacks requires further improvement to enhance its overall robustness.

5.13.2. Low efficiency in calculating CHFMs

From the experimental results in Table 19, it can be concluded that the proposed zero-watermarking method requires approximately 30 s to run on a general-purpose personal computer, indicating that it is not directly applicable to real-time multimedia streaming environments or large datasets. Experiments revealed that the computation of CHFMs constitutes the most time-consuming component of the proposed method, accounting for the majority of the overall execution time. Computing CHFMs efficiently to further reduce the runtime is a key issue to be addressed so that our method can meet real-time requirements.

5.13.3. Scalability of the proposed method

The proposed zero-watermarking generation framework is primarily designed for cover images; therefore, it cannot be directly extended to video watermarking. Evidently, video covers are not only composed of individual frames but also possess inherent relationships between adjacent frames, so applying the proposed technique directly to video scenes often yields unsatisfactory performance. In addition, the zero-watermarking signal generated by the proposed method is stored with a third-party trusted IPR authority, without considering integration with blockchain technology. Extending the proposed method to video applications and integrating it with blockchain technology are future research directions worthy of in-depth exploration.

6. Conclusion

Aiming to address the limitations of existing zero-watermarking methods, which often exhibit poor performance against specific attacks and can only process a single image, a multi-image robust zero-watermarking method based on CT, CHFMs, and FFT is proposed. First, a high-dimensional chaotic system and a Fibonacci Q-matrix are employed to encrypt a copyrighted image, thereby enhancing the security of the proposed method. Second, multiple images are fused into a single image, and the advantages of CT, CHFMs, and FFT are combined to construct a feature vector. Numerous experimental results demonstrate that the NC values remain above 0.95 for conventional image processing attacks, geometric attacks, and combined attacks, indicating the proposed method is effective against various types of attacks. Compared to the latest representative methods, it achieves superior performance with an average improvement of approximately 2%. The ablation experiments also confirmed the effectiveness of the combined approach using CT, CHFMs, and FFT. Although the proposed method can withstand most attacks, its performance still needs improvement. Overall, its limitations are primarily reflected in three aspects. First, the extracted feature vectors are sensitive to noise, resulting in insufficient resilience against attacks such as Gaussian noise. Second, the computational load associated with CHFMs is high, making the method less suitable for real-time applications. Third, the current design is optimized for images and does not directly support video. To address these limitations, future work may focus on the following three perspectives. First, explore the construction of noise-robust feature vectors using advanced feature extraction methods to enhance resistance against noise attacks. Second, improve the computational approach for CHFMs to enhance efficiency, enabling the proposed method to be applied in scenarios with stringent time-sensitivity requirements. Third, adapt the proposed method to video by considering its unique spatial and temporal characteristics. Additionally, we plan to integrate blockchain and smart contract technology to create a more comprehensive copyright protection model.

CRediT authorship contribution statement

Xinhui Lu: Writing – original draft, Software, Methodology. Guangyun Yang: Visualization, Methodology. Yu Lu: Visualization, Methodology. Xiangguang Xiong: Writing – review & editing, Supervision, Methodology.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgments

This work was supported in part by the Natural Science Foundation of Science and Technology Department of Guizhou Province, China (ZK[2023] 252), the Natural Science Research Project of Guizhou Provincial Department of Education, China ([2023]010), the National Natural Science Foundation of China (U22A2026), and the Qiankehe Platform Talent Foundation of Science and Technology Department of Guizhou Province, China (BQW[2024] 015).

Data availability

Data will be made available on request.

References

[1] Q. Li, B. Ma, X. Wang, C. Wang, S. Gao, Image steganography in color conversion, IEEE Trans. Circuits Syst. II: Express Briefs 71 (1) (2023) 106–110, http://dx.doi.org/10.1109/TCSII.2023.3300330.
[2] T. Yang, H. Wu, B. Yi, G. Feng, X. Zhang, Semantic-preserving linguistic steganography by pivot translation and semantic-aware bins coding, IEEE Trans. Dependable Secur. Comput. 21 (1) (2023) 139–152, http://dx.doi.org/10.1109/TDSC.2023.3247493.
[3] Q. Li, B. Ma, X. Fu, X. Wang, C. Wang, X. Li, Robust image steganography via color conversion, IEEE Trans. Circuits Syst. Video Technol. (2024) http://dx.doi.org/10.1109/TCSVT.2024.3466961.
[4] Y. Chen, A. Malik, H. Wang, B. He, Y. Zhou, H. Wu, Enhancing robustness in video data hiding against recompression with a wide parameter range, J. Inf. Secur. Appl. 83 (2024) 103796, http://dx.doi.org/10.1016/j.jisa.2024.103796.
[5] H. Tao, L. Chongmin, M. Zain, N. Abdalla, Robust image watermarking theories and techniques: A review, J. Appl. Res. Technol. 12 (1) (2014) 122–138, http://dx.doi.org/10.1016/S1665-6423(14)71612-8.
[6] P. Khare, K. Srivastava, A secured and robust medical image watermarking approach for protecting integrity of medical images, Trans. Emerg. Telecommun. Technol. 32 (2) (2021) e3918, http://dx.doi.org/10.1002/ett.3918.
[7] H. Ren, A. Yan, L. Li, Z. Zhang, N. Li, C. Gao, Are you copying my prompt? Protecting the copyright of vision prompt for VPaaS via watermarking, Comput. Stand. Interfaces 94 (2025) 103992, http://dx.doi.org/10.1016/j.csi.2025.103992.
[8] P. Ye, Z. Li, Z. Yang, P. Chen, Z. Zhang, N. Li, J. Zheng, Periodic watermarking for copyright protection of large language models in cloud computing security, Comput. Stand. Interfaces 94 (2025) 103983, http://dx.doi.org/10.1016/j.csi.2025.103983.
[9] Q. Wen, P. Sun, S. Wang, Concept and application of zero watermarking, Acta Automat. Sinica (02) (2003) 214–216, http://dx.doi.org/10.3321/j.issn:0372-2112.2003.02.015.
[10] J. Yang, K. Hu, X. Wang, H. Wang, Q. Liu, Y. Mao, An efficient and robust zero watermarking algorithm, Multimedia Tools Appl. 81 (14) (2022) 20127–20145, http://dx.doi.org/10.1007/s11042-022-12115-8.
[11] C. Chang, C. Chuang, An image intellectual property protection scheme for gray-level images using visual secret sharing strategy, Pattern Recognit. Lett. 23 (8) (2002) 931–941, http://dx.doi.org/10.1016/S0167-8655(02)00023-5.
[12] C. Chang, Y. Lin, Adaptive watermark mechanism for rightful ownership protection, J. Syst. Softw. 81 (7) (2008) 1118–1129, http://dx.doi.org/10.1016/j.jss.2007.07.036.
[13] B. Zou, J. Du, X. Liu, Y. Wang, Distinguishable zero-watermarking scheme with similarity-based retrieval for digital rights management of fundus image, Multimedia Tools Appl. 77 (2018) 28685–28708, http://dx.doi.org/10.1007/s11042-018-5995-4.
[14] C. Wang, D. Qian, D. Ling, Z. Hua, H. Jian, Robust zero-watermarking algorithm via multi-scale feature analysis for medical images, J. Inf. Secur. Appl. 89 (2025) 103937, http://dx.doi.org/10.1016/j.jisa.2024.103937.
[15] N. Ren, Y. Hu, C. Zhu, S. Guo, X. Zhu, Moment invariants based zero watermarking algorithm for trajectory data, J. Inf. Secur. Appl. 86 (2024) 103867, http://dx.doi.org/10.1016/j.jisa.2024.103867.
[16] M. Yang, B. Li, A. Bhatti, Y. Shao, W. Chen, Robust watermarking algorithm for medical images based on non-subsampled Shearlet transform and Schur decomposition, Comput. Mater. Contin. 75 (3) http://dx.doi.org/10.32604/cmc.2023.036904.
[17] T. Huang, J. Xu, Y. Yang, B. Han, Robust zero-watermarking algorithm for medical images using double-tree complex wavelet transform and Hessenberg decomposition, Mathematics 10 (7) (2022) 1154, http://dx.doi.org/10.3390/math10071154.
[18] Y. Lu, H. Lu, Y. Yang, G. Xiong, Robust zero-watermarking algorithm for multi-medical images based on FFST-Schur and Tent mapping, Biomed. Signal Process. Control. 96 (2024) 106557, http://dx.doi.org/10.1016/j.bspc.2024.106557.
[19] B. Wang, W. Wang, P. Zhao, A zero-watermark algorithm for multiple images based on visual cryptography and image fusion, J. Vis. Commun. Image Represent. 87 (2022) 103569, http://dx.doi.org/10.1016/j.jvcir.2022.103569.
[20] X. Wu, J. Li, A. Bhatti, W. Chen, Logistic map and Contourlet-based robust zero watermark for medical images, in: Innovation in Medicine and Healthcare Systems, and Multimedia: Proceedings of KES-InMed-19 and KES-IIMSS-19 Conferences, 2019, pp. 115–123, http://dx.doi.org/10.1007/978-981-13-8566-7_11.
[21] P. Meesala, M. Roy, M. Thounaojam, A robust medical image zero-watermarking algorithm using Collatz and Fresnelet transforms, J. Inf. Secur. Appl. 85 (2024) 103855, http://dx.doi.org/10.1016/j.jisa.2024.103855.
[22] Y. Yang, R. Qi, P. Niu, Y. Wang, Color image zero-watermarking based on fast quaternion generic polar complex exponential transform, Signal Process., Image Commun. 82 (2020) 115747, http://dx.doi.org/10.1016/j.image.2019.115747.
[23] B. Xiao, F. Ma, X. Wang, Image analysis by Bessel–Fourier moments, Pattern Recognit. 43 (8) (2010) 2620–2629, http://dx.doi.org/10.1016/j.patcog.2010.03.013.
[24] G. Gao, G. Jiang, Bessel–Fourier moment-based robust image zero-watermarking, Multimedia Tools Appl. 74 (2015) 841–858, http://dx.doi.org/10.1007/s11042-013-1701-8.
[25] A. Dash, K. Naik, Zero watermarking scheme based on Polar Harmonic Fourier moments, in: International Conference on Computing, Communication and Learning, 2022, pp. 162–171, http://dx.doi.org/10.1007/978-3-031-21750-0_14.
[26] S. Chen, A. Malik, X. Zhang, G. Feng, H. Wu, A fast method for robust video watermarking based on Zernike moments, IEEE Trans. Circuits Syst. Video Technol. 33 (12) (2023) 7342–7353, http://dx.doi.org/10.1109/TCSVT.2023.3281618.
[27] H. Zhu, Y. Yang, Z. Gui, Y. Zhu, Z. Chen, Image analysis by generalized Chebyshev–Fourier and generalized pseudo-Jacobi–Fourier moments, Pattern Recognit. 51 (2016) 1–11, http://dx.doi.org/10.1016/j.patcog.2015.09.018.
[28] C. Gong, J. Liu, M. Gong, B. Li, A. Bhatti, X. Ma, Robust medical zero-watermarking algorithm based on Residual-DenseNet, IET Biom. 11 (6) (2022) 547–556, http://dx.doi.org/10.1049/bme2.12100.
[29] Q. He, Y. He, T. Luo, Y. Song, Shrinkage and redundant feature elimination network-based robust image zero-watermarking, Symmetry 15 (5) (2023) 964, http://dx.doi.org/10.3390/sym15050964.
[30] Y. Liu, C. Wang, M. Lu, J. Yang, J. Gui, S. Zhang, From simple to complex scenes: Learning robust feature representations for accurate human parsing, IEEE Trans. Pattern Anal. Mach. Intell. (2024) http://dx.doi.org/10.1109/TPAMI.2024.3366769.
[31] C. Wang, X. Li, Z. Xia, Q. Li, H. Zhang, J. Li, B. Han, B. Ma, HIWANet: A high imperceptibility watermarking attack network, Eng. Appl. Artif. Intell. 133 (2024) 108039, http://dx.doi.org/10.1016/j.engappai.2024.108039.
[32] Y. Liu, L. Zhang, H. Wu, Z. Wang, X. Zhang, Reducing high-frequency artifacts for generative model watermarking via wavelet transform, IEEE Internet Things J. 11 (10) (2024) 18503–18515, http://dx.doi.org/10.1109/JIOT.2024.3363613.
[33] Y. Liu, H. Wu, X. Zhang, Robust and imperceptible black-box DNN watermarking based on Fourier perturbation analysis and frequency sensitivity clustering, IEEE Trans. Dependable Secur. Comput. (2024) http://dx.doi.org/10.1109/TDSC.2024.3384416.
[34] L. Lin, D. Wu, J. Wang, Y. Chen, X. Zhang, H. Wu, Automatic, robust and blind video watermarking resisting camera recording, IEEE Trans. Circuits Syst. Video Technol. (2024) http://dx.doi.org/10.1109/TCSVT.2024.3448502.
[35] J. Gao, Z. Li, B. Fan, An efficient robust zero watermarking scheme for diffusion tensor-magnetic resonance imaging high-dimensional data, J. Inf. Secur. Appl. 65 (2022) 103106, http://dx.doi.org/10.1016/j.jisa.2021.103106.
[36] X. Chang, B. Chen, W. Ding, X. Liao, A DNN robust video watermarking method in dual-tree complex wavelet transform domain, J. Inf. Secur. Appl. 85 (2024) 103868, http://dx.doi.org/10.1016/j.jisa.2024.103868.
[37] J. Wang, W. Yu, J. Wang, Y. Zhao, J. Zhang, D. Jiang, A new six-dimensional hyperchaotic system and its secure communication circuit implementation, Int. J. Circuit Theory Appl. 47 (5) (2019) 702–717, http://dx.doi.org/10.1002/cta.2617.
[38] T. Zhou, J. Shen, X. Li, C. Wang, H. Tan, Logarithmic encryption scheme for cyber–physical systems employing Fibonacci Q-matrix, Future Gener. Comput. Syst. 108 (2020) 1307–1313, http://dx.doi.org/10.1016/j.future.2018.04.008.
[39] P. Dong, G. Brankov, P. Galatsanos, Y. Yong, F. Davoine, Digital watermarking robust to geometric distortions, IEEE Trans. Image Process. 14 (12) (2005) 2140–2150, http://dx.doi.org/10.1109/TIP.2005.857263.
[40] H. Song, S. Yu, X. Yang, L. Song, C. Wang, Contourlet-based image adaptive watermarking, Signal Process., Image Commun. 23 (3) (2008) 162–178, http://dx.doi.org/10.1016/j.image.2008.01.005.
[41] Z. Ping, R. Wu, Y. Sheng, Image description with Chebyshev–Fourier moments, J. Opt. Soc. Amer. A 19 (9) (2002) 1748–1754, http://dx.doi.org/10.1364/josaa.19.001748.
[42] Z. Ping, H. Ren, J. Zou, Y. Sheng, W. Bo, Generic orthogonal moments: Jacobi–Fourier moments for invariant image description, Pattern Recognit. 40 (4) (2007) 1245–1254, http://dx.doi.org/10.1016/j.patcog.2006.07.016.
[43] R. Jain, M. Kumar, K. Jain, M. Jain, Digital image watermarking using hybrid DWT-FFT technique with different attacks, in: 2015 International Conference on Communications and Signal Processing, ICCSP, 2015, pp. 0672–0675, http://dx.doi.org/10.1109/ICCSP.2015.7322574.
[44] BossBase image database. https://www.kaggle.com/datasets/lijiyu/bossbase.
[45] BOW-2 image database. https://data.mendeley.com/datasets/kb3ngxfmjw/1.
[46] COVID image database. https://github.com/ieee8023/covid-chestxray-dataset.
[47] SIPI image database. http://sipi.usc.edu/database/.