<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="https://repository.hneu.edu.ua/handle/123456789/178">
    <title>DSpace Collection:</title>
    <link>https://repository.hneu.edu.ua/handle/123456789/178</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://repository.hneu.edu.ua/handle/123456789/39145" />
        <rdf:li rdf:resource="https://repository.hneu.edu.ua/handle/123456789/39144" />
        <rdf:li rdf:resource="https://repository.hneu.edu.ua/handle/123456789/39088" />
        <rdf:li rdf:resource="https://repository.hneu.edu.ua/handle/123456789/39087" />
      </rdf:Seq>
    </items>
    <dc:date>2026-04-05T16:56:29Z</dc:date>
  </channel>
  <item rdf:about="https://repository.hneu.edu.ua/handle/123456789/39145">
    <title>Development and research of multimodal neural architectures for heterogeneous unbalanced data in classification tasks</title>
    <link>https://repository.hneu.edu.ua/handle/123456789/39145</link>
    <description>Title: Development and research of multimodal neural architectures for heterogeneous unbalanced data in classification tasks
Authors: Minukhin S.; Rudoi V.
Abstract: The article presents a comprehensive study of modern multimodal neural architectures for integrating heterogeneous and partially imbalanced data in classification tasks. It considers early and late fusion approaches, hybrid architectures with cross-modal attention, and transformers that allow the formation of consistent latent spaces of visual, auditory, and textual features. Particular attention is paid to contrastive learning (CLIP-like approaches, multimodal InfoNCE), which ensures semantic consistency of representations and improves classification accuracy in the presence of uneven data distribution and rare classes. A model is proposed that combines early and late fusion with cross-modal attention and contrastive learning to form a coherent joint latent space. Features of each modality are processed by specialized encoders, and fusion is performed with adaptive weighting, which minimizes the impact of heterogeneous data imbalance and enables the efficient processing of signals of different natures and intensities. The use of pruning, quantization, and knowledge distillation has reduced computational costs without losing accuracy, ensuring stable model performance in real-world streaming scenarios with limited resources. The results of applying the proposed model to the BDD100K and CMU-MOSEI datasets confirmed the model's high efficiency in processing heterogeneous and imbalanced data. For BDD100K, an Accuracy of 0.953, F1-score of 0.956, and ROC-AUC of 0.947 were achieved, with integral indicators Micro F1, Macro F1, and Weighted F1 of 0.953, 0.949, and 0.955, respectively. For CMU-MOSEI, the model achieved an Accuracy of 0.956, F1-score of 0.969, and ROC-AUC of 0.968, with Micro F1, Macro F1, and Weighted F1 of 0.956, 0.962, and 0.968, respectively.
A comparative analysis with classical feature concatenation approaches, recent state-of-the-art multimodal fusion models, and AutoML-based solutions demonstrated that the proposed architecture consistently outperforms existing methods. In particular, the model improves classification accuracy by approximately 2–4% compared to recent SOTA architectures and provides more stable F1-scores for minority classes. A comparison with the AutoML-based framework B-T4SA also confirms the robustness of the proposed approach. These results demonstrate that the developed model ensures higher classification consistency for both frequent and rare classes under heterogeneous and imbalanced data conditions.</description>
    <dc:date>2026-01-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="https://repository.hneu.edu.ua/handle/123456789/39144">
    <title>A hybrid approach to visually oriented generation of culinary recipes based on convolutional neural networks and large language models</title>
    <link>https://repository.hneu.edu.ua/handle/123456789/39144</link>
    <description>Title: A hybrid approach to visually oriented generation of culinary recipes based on convolutional neural networks and large language models
Authors: Minukhin S.; Shaposhnyk M.
Abstract: This article presents a hybrid approach to visually grounded recipe generation that combines computer vision and natural language processing. By integrating multi-label Convolutional Neural Networks with Large Language Models, the architecture addresses the opacity inherent in mapping pixel-level abstractions onto culinary text. To resolve the mismatch between coarse whole-dish categorization and fine-grained ingredient composition, the research prioritizes semantic fidelity. The study first diagnoses the limitations of conventional single-label classification and then re-engineers the DenseNet-121 topology to support concurrent output streams for ingredient identification. Grounded in transfer learning, the vision component, trained on the Food-101 corpus, uses cost-sensitive optimization to sharpen detection accuracy. Text generation is performed with the Llama 3.1 8B model, guided by In-Context Learning and evaluated with BLEU, ROUGE, and Cosine Similarity benchmarks. Empirical evidence underscores the framework's efficacy: the refined detector achieved a Recall of 0.91, and integrating visual context into structured prompts raised the mean Cosine Similarity to 0.765, a marked improvement in capturing nuanced dish variations over established baselines. The proposed hybrid approach successfully bridges the semantic gap between visual data and textual generation. Explicitly injecting detected ingredients into the LLM context enables the creation of instance-specific recipes rather than template-based outputs, significantly mitigating AI hallucinations and increasing the relevance of the results.</description>
    <dc:date>2026-01-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="https://repository.hneu.edu.ua/handle/123456789/39088">
    <title>Information platform for automating the processes of accounting for damaged citizens’ property</title>
    <link>https://repository.hneu.edu.ua/handle/123456789/39088</link>
    <description>Title: Information platform for automating the processes of accounting for damaged citizens’ property
Authors: Tokariev V.; Skrynnyk K.
Abstract: The research explores the digital transformation of compensation processes through the e-Recovery system, focusing on automating the accounting of property damaged by military actions.</description>
    <dc:date>2026-01-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="https://repository.hneu.edu.ua/handle/123456789/39087">
    <title>Information system for parking occupancy monitoring</title>
    <link>https://repository.hneu.edu.ua/handle/123456789/39087</link>
    <description>Title: Information system for parking occupancy monitoring
Authors: Tokariev V.; Kuznetsov B.
Abstract: The study addresses the inefficiencies of traditional parking management by proposing an automated IT service for real-time occupancy monitoring and statistical reporting.</description>
    <dc:date>2026-01-01T00:00:00Z</dc:date>
  </item>
</rdf:RDF>