
Browsing by Author "Terres Escudero, Erik B."

Showing 1 - 2 of 2
    Let's do it right the first time: survey on security concerns in the way to quantum software engineering
(Elsevier B.V., 2023-06-14) Arias Alamo, Danel; García Rodríguez de Guzmán, Ignacio; Rodríguez, Moises; Terres Escudero, Erik B.; Sanz Urquijo, Borja; Gaviria de la Puerta, José; Pastor López, Iker; Zubillaga Rego, Agustín José; García Bringas, Pablo
Quantum computing is no longer a promise of the future but a rapidly evolving reality. Advances in quantum hardware are turning into practice a computational model that until now was only theoretical. Proof of this is the emergence of development languages and platforms that bring physical principles closer to developers, making it feasible to propose, across different areas of society, solutions to problems that were previously unsolvable. However, security vulnerabilities are also emerging that could hinder the progress of quantum computing, as well as its transition to and development in industry. For this reason, this article reviews some of the first artefacts emerging in the field of quantum computing. From this analysis, we begin to identify security issues that could become potential vulnerabilities in the quantum software of tomorrow. Likewise, drawing on experience in classical software development, testing is analysed as a candidate technique for improving security in quantum software development. Following the principles of Quantum Software Engineering, we are aware of the lack of tools, techniques and knowledge needed to guarantee the development of quantum software in the immediate future. This article therefore aims to offer some first clues towards a roadmap for secure quantum software development.
    On the improvement of generalization and stability of forward-only learning via neural polarization
    (IOS Press BV, 2024-10-16) Terres Escudero, Erik B.; Ser Lorente, Javier del; García Bringas, Pablo
Forward-only learning algorithms have recently gained attention as alternatives to gradient backpropagation, replacing the backward step of the latter with an additional contrastive forward pass. Among these approaches, the so-called Forward-Forward Algorithm (FFA) has been shown to achieve competitive levels of performance in terms of generalization and complexity. Networks trained using FFA learn to contrastively maximize a layer-wise defined goodness score when presented with real data (denoted as positive samples) and to minimize it when processing synthetic data (resp., negative samples). However, this algorithm still faces weaknesses that negatively affect model accuracy and training stability, primarily due to a gradient imbalance between positive and negative samples. To overcome this issue, in this work we propose a novel implementation of the FFA algorithm, denoted as Polar-FFA, which extends the original formulation by introducing a neural division (polarization) between positive and negative instances. Neurons in each of these groups aim to maximize their goodness when presented with their respective data type, thereby creating a symmetric gradient behavior. To empirically gauge the improved learning capabilities of our proposed Polar-FFA, we perform several systematic experiments using different activation and goodness functions over image classification datasets. Our results demonstrate that Polar-FFA outperforms FFA in terms of accuracy and convergence speed. Furthermore, its lower reliance on hyperparameters reduces the need for hyperparameter tuning to guarantee optimal generalization capabilities, thereby allowing for a broader range of neural network configurations.
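The contrastive objective described in the abstract can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's exact formulation: the squared-activation goodness, the logistic loss, the threshold `theta`, and the boolean `pos_mask` split of neurons into a positive and a negative group are all assumptions chosen to mirror common Forward-Forward setups.

```python
import numpy as np

def goodness(activations):
    # Layer-wise goodness: sum of squared activations (a common FFA choice).
    return np.sum(activations ** 2, axis=-1)

def ffa_loss(act_pos, act_neg, theta=2.0):
    # Standard FFA objective: push goodness above theta for positive
    # samples and below theta for negative samples (logistic form).
    g_pos = goodness(act_pos)
    g_neg = goodness(act_neg)
    return (np.mean(np.log1p(np.exp(-(g_pos - theta))))
            + np.mean(np.log1p(np.exp(g_neg - theta))))

def polar_ffa_loss(act_pos, act_neg, pos_mask, theta=2.0):
    # Polarized variant: neurons flagged by pos_mask respond to positive
    # data and the remaining neurons to negative data, so both sample
    # types yield a maximization signal and gradients stay symmetric.
    g_pos = goodness(act_pos[..., pos_mask]) - goodness(act_pos[..., ~pos_mask])
    g_neg = goodness(act_neg[..., ~pos_mask]) - goodness(act_neg[..., pos_mask])
    return (np.mean(np.log1p(np.exp(-(g_pos - theta))))
            + np.mean(np.log1p(np.exp(-(g_neg - theta)))))
```

Note how in the polarized variant both losses are maximization terms of the same shape, which is the symmetry the abstract attributes to Polar-FFA, whereas plain FFA mixes a maximization term and a minimization term.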
Rights

Except where otherwise noted, the item's license is described as:
Creative Commons Attribution-NonCommercial-NoDerivs 4.0 License

Software DSpace copyright © 2002-2025 LYRASIS
