- Classification of motor faults based on transmission coefficient and reflection coefficient of omni-directional antenna using DCNN Induction machines are the most commonly used electrical rotating machines in the field. In this paper, we propose an antenna-based approach for the classification of motor faults in induction motors using the reflection coefficient S11 and the transmission coefficient S21 of the antenna. The spectrograms of S11 and S21 are seen to possess unique signatures for the various fault conditions, and these signatures are used for classification. To learn the required characteristics and classification boundaries, a deep convolutional neural network (DCNN) is applied to the spectrograms of the S-parameters. The DCNN reaches a classification accuracy of 93% using S11, 98.1% using S21, and 100% using both S11 and S21. The effects of the antenna's operating frequency, its location, and the signal duration on classification accuracy are also presented and discussed. 3 authors · Nov 3, 2025
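As a rough illustration of the pipeline this abstract describes, the sketch below converts a (synthetic) S-parameter trace into a log-magnitude spectrogram and classifies it with a small 2-D CNN. The sampling rate, window length, number of fault classes, and network layout are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch (not the authors' code): spectrogram of an S-parameter trace + small CNN.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram

def s_param_spectrogram(s_trace, fs=1e6, nperseg=256):
    """Log-magnitude spectrogram of a complex S11/S21 trace (assumed shape: [n_samples])."""
    f, t, sxx = spectrogram(np.abs(s_trace), fs=fs, nperseg=nperseg)
    return np.log1p(sxx).astype(np.float32)          # [freq_bins, time_bins]

class FaultDCNN(nn.Module):
    """Small 2-D CNN over one- or two-channel spectrograms (S11, S21, or both)."""
    def __init__(self, in_channels=2, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):                            # x: [batch, channels, freq, time]
        return self.classifier(self.features(x).flatten(1))

# Usage with synthetic data: stack S11 and S21 spectrograms as two input channels.
s11 = np.random.randn(8192) + 1j * np.random.randn(8192)
s21 = np.random.randn(8192) + 1j * np.random.randn(8192)
spec = np.stack([s_param_spectrogram(s11), s_param_spectrogram(s21)])
logits = FaultDCNN()(torch.from_numpy(spec).unsqueeze(0))
```

Using both channels mirrors the abstract's observation that combining S11 and S21 gives the best separation between fault classes.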
- DCNFIS: Deep Convolutional Neuro-Fuzzy Inference System A key challenge in eXplainable Artificial Intelligence is the well-known tradeoff between the transparency of an algorithm (i.e., how easily a human can directly understand the algorithm, as opposed to receiving a post-hoc explanation) and its accuracy. We report on the design of a new deep network that achieves improved transparency without sacrificing accuracy. We design a deep convolutional neuro-fuzzy inference system (DCNFIS) by hybridizing fuzzy logic and deep learning models, and show that DCNFIS performs as accurately as existing convolutional neural networks on four well-known datasets and three well-known architectures. Our performance comparison with available fuzzy methods shows that, to the best of our knowledge, DCNFIS is the state-of-the-art fuzzy system, outperforming other shallow and deep fuzzy methods. Finally, we exploit the transparency of fuzzy logic by deriving explanations, in the form of saliency maps, from the fuzzy rules encoded in the network, thereby gaining a benefit of fuzzy logic over regular deep learning methods. We investigate the properties of these explanations in greater depth using the Fashion-MNIST dataset. 6 authors · Aug 11, 2023
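The abstract does not spell out the fuzzy layer, so the sketch below is only one plausible way to hybridize fuzzy rules with CNN features: Gaussian membership functions per rule, a product t-norm for rule firing, and a linear map from rule strengths to classes. All names and sizes are assumptions, not the DCNFIS architecture.

```python
# Minimal sketch (assumptions, not the DCNFIS code): a fuzzy rule layer on CNN features.
import torch
import torch.nn as nn

class FuzzyRuleLayer(nn.Module):
    def __init__(self, in_features, num_rules, num_classes):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_rules, in_features))   # rule prototypes
        self.log_sigmas = nn.Parameter(torch.zeros(num_rules, in_features))
        self.rule_to_class = nn.Linear(num_rules, num_classes)

    def forward(self, x):                        # x: [batch, in_features]
        diff = x.unsqueeze(1) - self.centers     # [batch, rules, features]
        # Log of Gaussian memberships; summing over features = log of a product t-norm.
        log_memb = -0.5 * (diff / self.log_sigmas.exp()) ** 2
        firing = log_memb.sum(dim=-1)            # [batch, rules]
        return self.rule_to_class(torch.softmax(firing, dim=-1))

# Usage: CNN backbone -> flattened feature vectors -> fuzzy rule layer -> class logits.
features = torch.randn(16, 128)                  # stand-in for CNN feature vectors
logits = FuzzyRuleLayer(128, num_rules=10, num_classes=10)(features)
```

Because each rule is tied to an explicit prototype and membership widths, saliency-style explanations can in principle be read off the learned rule parameters, which is the kind of transparency the abstract emphasizes.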
- CDNet is all you need: Cascade DCN based underwater object detection RCNN Object detection is a fundamental research direction in computer vision and a basic building block for more advanced vision tasks. It is widely used in practical applications such as object tracking, video behavior recognition, and underwater robotic vision. Cascade R-CNN and the Deformable Convolution Network (DCN) are both classical and effective object detection methods. In this report, we evaluate our Cascade-DCN-based method on underwater optical and acoustic image datasets with various engineering tricks and augmentations. 1 author · Nov 25, 2021
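For readers unfamiliar with the DCN component, the sketch below shows a deformable convolution block of the kind used in Cascade-DCN detectors, with offsets predicted by a plain convolution and applied via torchvision's DeformConv2d. Channel sizes and shapes are illustrative, and this is not the report's code.

```python
# Minimal sketch (not the report's code): a deformable convolution block.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableBlock(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        # 2 offsets (dx, dy) per kernel sample position.
        self.offset_pred = nn.Conv2d(in_ch, 2 * k * k, kernel_size=k, padding=k // 2)
        self.deform_conv = DeformConv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)

    def forward(self, x):
        offsets = self.offset_pred(x)            # [batch, 2*k*k, H, W]
        return self.deform_conv(x, offsets)

# Usage on a dummy feature map, e.g. a backbone stage output feeding a detection head.
feat = torch.randn(1, 64, 32, 32)
out = DeformableBlock(64, 64)(feat)              # [1, 64, 32, 32]
```

The learned offsets let the kernel sample off-grid locations, which helps with the deformed, low-contrast object appearances typical of underwater optical and acoustic imagery.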
- FlowDCN: Exploring DCN-like Architectures for Fast Image Generation with Arbitrary Resolution Arbitrary-resolution image generation remains a challenging task in AIGC, as it requires handling varying resolutions and aspect ratios while maintaining high visual quality. Existing transformer-based diffusion methods suffer from quadratic computation cost and limited resolution extrapolation capabilities, making them less effective for this task. In this paper, we propose FlowDCN, a purely convolution-based generative model with linear time and memory complexity that can efficiently generate high-quality images at arbitrary resolutions. Equipped with a new design of learnable group-wise deformable convolution block, our FlowDCN yields higher flexibility and capability to handle different resolutions with a single model. FlowDCN achieves a state-of-the-art 4.30 sFID on the 256×256 ImageNet benchmark and comparable resolution extrapolation results, surpassing transformer-based counterparts in terms of convergence speed (only 1/5 of the images), visual quality, parameters (8% reduction), and FLOPs (20% reduction). We believe FlowDCN offers a promising solution to scalable and flexible image synthesis. 7 authors · Oct 29, 2024
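The abstract's central design is a learnable group-wise deformable convolution block; the sketch below shows one plausible reading of that idea, where each channel group predicts and uses its own sampling offsets. Group count, channel sizes, the residual structure, and the normalization choice are assumptions, not the FlowDCN implementation.

```python
# Minimal sketch (assumptions, not the FlowDCN code): a group-wise deformable conv block.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class GroupwiseDeformBlock(nn.Module):
    def __init__(self, channels, groups=4, k=3):
        super().__init__()
        # One (dx, dy) pair per kernel position and per offset group.
        self.offset_pred = nn.Conv2d(channels, 2 * groups * k * k, k, padding=k // 2)
        self.deform_conv = DeformConv2d(channels, channels, k, padding=k // 2, groups=groups)
        self.norm = nn.GroupNorm(groups, channels)

    def forward(self, x):
        offsets = self.offset_pred(x)            # [batch, 2*groups*k*k, H, W]
        return x + self.norm(self.deform_conv(x, offsets))   # residual block

# Usage: because the block is purely convolutional, any spatial size works,
# which is what makes the design attractive for arbitrary-resolution generation.
x = torch.randn(2, 64, 48, 80)                   # non-square resolution
y = GroupwiseDeformBlock(64)(x)                  # same shape out
```

Unlike attention, the cost of such a block grows linearly with the number of pixels, consistent with the linear time and memory complexity claimed in the abstract.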
- Noisy Softmax: Improving the Generalization Ability of DCNN via Postponing the Early Softmax Saturation Over the past few years, softmax and SGD have become a commonly used component and the default training strategy in CNN frameworks, respectively. However, when optimizing CNNs with SGD, the saturation behavior of softmax often gives the illusion that training is going well and is therefore overlooked. In this paper, we first emphasize that the early saturation behavior of softmax impedes the exploration of SGD, which is sometimes a reason for the model converging to a poor local minimum, and then propose Noisy Softmax to mitigate this early saturation issue by injecting annealed noise into softmax during each iteration. This noise-injection operation aims at postponing the early saturation and thereby maintaining continuous gradient propagation, which encourages the SGD solver to be more exploratory and helps it find a better local minimum. We empirically verify the benefit of early softmax desaturation and show that our method indeed improves the generalization ability of CNN models through regularization. We experimentally find that this early desaturation helps optimization in many tasks, yielding state-of-the-art or competitive results on several popular benchmark datasets. 3 authors · Aug 12, 2017
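The sketch below captures the general idea of annealed noise injection into softmax: during training, the ground-truth logit is perturbed by a decaying non-negative noise term so the loss does not saturate early. This is one simple realization under stated assumptions, not necessarily the paper's exact formulation or schedule.

```python
# Minimal sketch of the general idea (an assumption, not the paper's exact formulation):
# subtract annealed non-negative noise from the ground-truth logit before softmax.
import torch
import torch.nn.functional as F

def noisy_softmax_loss(logits, targets, noise_scale):
    """logits: [batch, classes]; noise_scale is annealed towards 0 over training."""
    if noise_scale > 0:
        noise = noise_scale * torch.randn(logits.size(0), device=logits.device).abs()
        logits = logits.scatter_add(
            1, targets.unsqueeze(1), -noise.unsqueeze(1)      # perturb only the target logit
        )
    return F.cross_entropy(logits, targets)

# Usage: anneal the scale over epochs so training ends with plain softmax.
logits = torch.randn(32, 10, requires_grad=True)
targets = torch.randint(0, 10, (32,))
for epoch in range(10):
    scale = 0.5 * (1 - epoch / 10)               # simple linear annealing schedule
    loss = noisy_softmax_loss(logits, targets, scale)
```

Lowering the target logit makes each example look "harder" early in training, which keeps gradients flowing instead of vanishing once softmax saturates.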