
libopenvino_ir_frontend2440-2024.4.0-1.1 RPM for riscv64

From openSUSE Ports Tumbleweed for riscv64

Name: libopenvino_ir_frontend2440
Version: 2024.4.0
Release: 1.1
Group: Unspecified
Size: 343947 bytes
Packager: https://bugs.opensuse.org
Url: https://github.com/openvinotoolkit/openvino
Distribution: openSUSE Tumbleweed
Vendor: openSUSE
Build date: Tue Oct 15 02:56:54 2024
Build host: reproducible
Source RPM: openvino-2024.4.0-1.1.src.rpm
Summary: IR frontend for Intel OpenVINO toolkit
OpenVINO is an open-source toolkit for optimizing and deploying AI inference.

This package provides the IR (Intermediate Representation) frontend for OpenVINO.

Provides

Requires

License

Apache-2.0 AND BSD-2-Clause AND BSD-3-Clause AND HPND AND JSON AND MIT AND OFL-1.1 AND Zlib

Changelog

* Tue Oct 15 2024 Alessandro de Oliveira Faria <cabelo@opensuse.org>
  - Temporarily use gcc-13 in Tumbleweed/Factory/Slowroll:
    the source code of the level-zero library and the NPU module
    is incompatible with gcc-14. I am working with Intel on tests
    to return to the native gcc.
  - Update to 2024.4.0
  - Summary of major features and improvements  
    * More Gen AI coverage and framework integrations to minimize
      code changes
      + Support for GLM-4-9B Chat, MiniCPM-1B, Llama 3 and 3.1,
      Phi-3-Mini, Phi-3-Medium and YOLOX-s models.
      + Noteworthy notebooks added: Florence-2, NuExtract-tiny
      Structure Extraction, Flux.1 Image Generation, PixArt-α:
      Photorealistic Text-to-Image Synthesis, and Phi-3-Vision
      Visual Language Assistant.
    * Broader Large Language Model (LLM) support and more model
      compression techniques.
      + OpenVINO™ runtime optimized for Intel® Xe Matrix Extensions
      (Intel® XMX) systolic arrays on built-in GPUs for efficient
      matrix multiplication, resulting in a significant LLM
      performance boost with improved 1st and 2nd token
      latency, as well as a smaller memory footprint on
      Intel® Core™ Ultra Processors (Series 2).
      + Memory sharing enabled for NPUs on Intel® Core™ Ultra
      Processors (Series 2) for efficient pipeline integration
      without memory copy overhead.
      + Addition of the PagedAttention feature for discrete GPUs*
      enables a significant boost in throughput for parallel
      inferencing when serving LLMs on Intel® Arc™ Graphics
      or Intel® Data Center GPU Flex Series.
    * More portability and performance to run AI at the edge,
      in the cloud, or locally.
      + OpenVINO™ Model Server now comes with production-quality
      support for an OpenAI-compatible API, which enables
      significantly higher throughput for parallel inferencing
      on Intel® Xeon® processors when serving LLMs to many
      concurrent users. A request sketch follows this list.
      + Improved performance and memory consumption with prefix
      caching, KV cache compression, and other optimizations
      for serving LLMs using OpenVINO™ Model Server.
      + Support for Python 3.12.
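    A minimal Python sketch of a request against that
    OpenAI-compatible API (the /v3/chat/completions route, port
    8000, and the model name "llm" are assumptions for
    illustration, not taken from this changelog):
      import requests  # third-party HTTP client

      resp = requests.post(
          "http://localhost:8000/v3/chat/completions",  # assumed endpoint
          json={
              "model": "llm",  # hypothetical model name served by OVMS
              "messages": [{"role": "user", "content": "Hello!"}],
              "max_tokens": 64,
          },
      )
      print(resp.json()["choices"][0]["message"]["content"])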
  - Support Change and Deprecation Notices
    * Using deprecated features and components is not advised.
      They are available to enable a smooth transition to new
      solutions and will be discontinued in the future.
      To keep using discontinued features, you will have to
      revert to the last LTS OpenVINO version supporting them.
      For more details, refer to the OpenVINO Legacy Features
      and Components page.
    * Discontinued in 2024.0:
      + Runtime components:
    - Intel® Gaussian & Neural Accelerator (Intel® GNA).
      Consider using the Neural Processing Unit (NPU) for
      low-powered systems like Intel® Core™ Ultra or
      14th generation and beyond.
    - OpenVINO C++/C/Python 1.0 APIs (see 2023.3 API
      transition guide for reference).
    - All ONNX Frontend legacy API (known as
      ONNX_IMPORTER_API)
    - 'PerformanceMode.UNDEFINED' property as part of the
      OpenVINO Python API
      + Tools:
    - Deployment Manager. See installation and deployment
      guides for current distribution options.
    - Accuracy Checker.
    - Post-Training Optimization Tool (POT). Neural Network
      Compression Framework (NNCF) should be used instead.
    - A Git patch for NNCF integration with huggingface/
      transformers. The recommended approach is to use
      huggingface/optimum-intel for applying NNCF
      optimization on top of models from Hugging Face.
    - Support for Apache MXNet, Caffe, and Kaldi model
      formats. Conversion to ONNX may be used as a
      solution.
    * Deprecated and to be removed in the future:
      + The macOS x86_64 debug bins will no longer be
      provided with the OpenVINO toolkit, starting with
      OpenVINO 2024.5.
      + Python 3.8 is now considered deprecated, and it will not
      be available beyond the 2024.4 OpenVINO version.
      + dKMB support is now considered deprecated and will be
      fully removed with OpenVINO 2024.5.
      + Intel® Streaming SIMD Extensions (Intel® SSE) will be
      supported in source code form, but not enabled in the
      binary package by default, starting with OpenVINO 2025.0.
      + The openvino-nightly PyPI module will soon be discontinued.
      End-users should proceed with the Simple PyPI nightly repo
      instead. More information in Release Policy.
      + The OpenVINO™ Development Tools package (pip install
      openvino-dev) will be removed from installation options and
      distribution channels beginning with OpenVINO 2025.0.
      + Model Optimizer will be discontinued with OpenVINO 2025.0.
      Consider using the new conversion methods instead. For more
      details, see the model conversion transition guide.
      + OpenVINO property Affinity API will be discontinued with
      OpenVINO 2025.0. It will be replaced with CPU binding
      configurations (ov::hint::enable_cpu_pinning); see the
      sketch after this list.
      + OpenVINO Model Server components:
    - “auto shape” and “auto batch size” (reshaping a model in
      runtime) will be removed in the future. OpenVINO’s dynamic
      shape models are recommended instead.
      + A number of notebooks have been deprecated. For an
      up-to-date listing of available notebooks, refer to the
      OpenVINO™ Notebook index (openvinotoolkit.github.io).
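    A minimal Python sketch of the CPU binding hint named above
    (the Python property is assumed to mirror the C++
    ov::hint::enable_cpu_pinning; "model.xml" is a hypothetical
    path):
      import openvino as ov
      import openvino.properties.hint as hints

      core = ov.Core()
      model = core.read_model("model.xml")  # hypothetical IR file
      # Pin inference threads to CPU cores via the hint rather
      # than the discontinued Affinity property.
      compiled = core.compile_model(model, "CPU",
                                    {hints.enable_cpu_pinning: True})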
* Wed Oct 02 2024 Giacomo Comes <gcomes.obs@gmail.com>
  - Add Leap15 build
  - Remove comment lines in the spec file that cause the insertion
    of extra lines during a commit
* Sat Aug 10 2024 Alessandro de Oliveira Faria <cabelo@opensuse.org>
  - Remove NPU Compile Tool
    * openvino-remove-npu-compile-tool.patch
  - Update to 2024.3.0
  - Summary of major features and improvements  
    * More Gen AI coverage and framework integrations to minimize
      code changes
      + OpenVINO pre-optimized models are now available on Hugging
      Face, making it easier for developers to get started with
      these models.
    * Broader Large Language Model (LLM) support and more model
      compression techniques.
      + Significant improvement in LLM performance on Intel
      discrete GPUs with the addition of Multi-Head Attention
      (MHA) and oneDNN enhancements.
    * More portability and performance to run AI at the edge, in the
      cloud, or locally.
      + Improved CPU performance when serving LLMs with the
      inclusion of vLLM and continuous batching in the OpenVINO
      Model Server (OVMS). vLLM is an easy-to-use open-source
      library that supports efficient LLM inferencing and model
      serving.
  - Support Change and Deprecation Notices
    * Using deprecated features and components is not advised.
      They are available to enable a smooth transition to new
      solutions and will be discontinued in the future. To keep
      using discontinued features, you will have to revert to the
      last LTS OpenVINO version supporting them. For more details,
      refer to the OpenVINO Legacy Features and Components page.
    * Discontinued in 2024.0:
      + Runtime components:
    - Intel® Gaussian & Neural Accelerator (Intel® GNA). Consider
      using the Neural Processing Unit (NPU) for low-powered
      systems like Intel® Core™ Ultra or 14th generation
      and beyond.
    - OpenVINO C++/C/Python 1.0 APIs (see 2023.3 API transition
      guide for reference).
    - All ONNX Frontend legacy API (known as ONNX_IMPORTER_API)
    - 'PerformanceMode.UNDEFINED' property as part of the OpenVINO
      Python API
      + Tools:
    - Deployment Manager. See installation and deployment guides
      for current distribution options.
    - Accuracy Checker.
    - Post-Training Optimization Tool (POT). Neural Network
      Compression Framework (NNCF) should be used instead.
    - A Git patch for NNCF integration with huggingface/
      transformers. The recommended approach is to use
      huggingface/optimum-intel for applying NNCF optimization
      on top of models from Hugging Face.
    - Support for Apache MXNet, Caffe, and Kaldi model formats.
      Conversion to ONNX may be used as a solution.
    * Deprecated and to be removed in the future:
      + The OpenVINO™ Development Tools package (pip install
      openvino-dev) will be removed from installation options
      and distribution channels beginning with OpenVINO 2025.0.
      + Model Optimizer will be discontinued with OpenVINO 2025.0.
      Consider using the new conversion methods instead (see the
      sketch after this list). For more details, see the model
      conversion transition guide.
      + OpenVINO property Affinity API will be discontinued with
      OpenVINO 2025.0. It will be replaced with CPU binding
      configurations (ov::hint::enable_cpu_pinning).
      + OpenVINO Model Server components:
    - “auto shape” and “auto batch size” (reshaping a model
      in runtime) will be removed in the future. OpenVINO’s
      dynamic shape models are recommended instead.
      + A number of notebooks have been deprecated. For an
      up-to-date listing of available notebooks, refer to
      the OpenVINO™ Notebook index (openvinotoolkit.github.io).
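    A minimal Python sketch of the conversion methods that
    supersede Model Optimizer ("model.onnx" is a hypothetical
    input; the saved .xml/.bin pair is the IR that this
    package's frontend loads):
      import openvino as ov

      ov_model = ov.convert_model("model.onnx")  # in-memory conversion
      ov.save_model(ov_model, "model.xml")       # serialize to OpenVINO IR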
* Sat Jun 22 2024 Andreas Schwab <schwab@suse.de>
  - Add riscv-cpu-plugin subpackage
* Wed Jun 19 2024 Alessandro de Oliveira Faria <cabelo@opensuse.org>
  - Update to 2024.2.0
  - More Gen AI coverage and framework integrations to minimize code
    changes
    * Llama 3 optimizations for CPUs, built-in GPUs, and discrete
      GPUs for improved performance and efficient memory usage.
    * Support for Phi-3-mini, a family of AI models that leverages
      the power of small language models for faster, more accurate
      and cost-effective text processing.
    * Python Custom Operation is now enabled in OpenVINO, making
      it easier for Python developers to code their custom
      operations instead of using C++ custom operations (also
      supported). Python Custom Operation empowers users to
      implement their own specialized operations into any model.
    * Notebooks expansion to ensure better coverage for new models.
      Noteworthy notebooks added: DynamiCrafter, YOLOv10, Chatbot
      notebook with Phi-3, and QWEN2.
  - Broader Large Language Model (LLM) support and more model
    compression techniques.
    * GPTQ method for 4-bit weight compression added to NNCF for
      more efficient inference and improved performance of
      compressed LLMs (see the sketch after this list).
    * Significant LLM performance improvements and reduced latency
      for both built-in GPUs and discrete GPUs.
    * Significant improvement in 2nd token latency and memory
      footprint of FP16 weight LLMs on AVX2 (13th Gen Intel® Core™
      processors) and AVX512 (3rd Gen Intel® Xeon® Scalable
      Processors) based CPU platforms, particularly for small
      batch sizes.
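    A minimal Python sketch of 4-bit weight compression with
    NNCF ("llm.xml" is a hypothetical LLM IR; the GPTQ variant
    mentioned above additionally needs a calibration dataset,
    not shown):
      import nncf
      import openvino as ov

      model = ov.Core().read_model("llm.xml")
      compressed = nncf.compress_weights(
          model,
          mode=nncf.CompressWeightsMode.INT4_SYM,
          ratio=0.8,       # share of weights compressed to 4-bit
          group_size=128,  # group-wise quantization granularity
      )
      ov.save_model(compressed, "llm_int4.xml")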
  - More portability and performance to run AI at the edge, in the
    cloud, or locally.
    * Model Serving Enhancements:
    * Preview: OpenVINO Model Server (OVMS) now supports
      OpenAI-compatible API along with Continuous Batching and
      PagedAttention, enabling significantly higher throughput
      for parallel inferencing, especially on Intel® Xeon®
      processors, when serving LLMs to many concurrent users.
    * OpenVINO backend for Triton Server now supports built-in
      GPUs and discrete GPUs, in addition to dynamic
      shapes support.
    * Integration of TorchServe through the torch.compile OpenVINO
      backend for easy model deployment, provisioning to
      multiple instances, model versioning, and maintenance
      (a backend sketch follows this list).
    * Preview: addition of the Generate API, a simplified API
      for text generation using large language models with only
      a few lines of code. The API is available through the newly
      launched OpenVINO GenAI package (see the second sketch
      after this list).
    * Support for Intel Atom® Processor X Series. For more details,
      see System Requirements.
    * Preview: Support for Intel® Xeon® 6 processor.
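    A minimal Python sketch of the torch.compile OpenVINO
    backend (the torchvision model is a stand-in; importing
    openvino.torch is assumed to register the backend):
      import torch
      import openvino.torch  # registers the "openvino" backend
      import torchvision.models as models

      model = models.resnet18(weights=None)
      compiled = torch.compile(model, backend="openvino")
      out = compiled(torch.randn(1, 3, 224, 224))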
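    A minimal Python sketch of the Generate API from the
    OpenVINO GenAI package ("./llm_model_dir" is a hypothetical
    directory holding an exported OpenVINO LLM):
      import openvino_genai

      pipe = openvino_genai.LLMPipeline("./llm_model_dir", "CPU")
      print(pipe.generate("What is OpenVINO?", max_new_tokens=64))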
  - Support Change and Deprecation Notices
    * Using deprecated features and components is not advised.
      They are available to enable a smooth transition to new
      solutions and will be discontinued in the future.
      To keep using discontinued features, you will have to revert
      to the last LTS OpenVINO version supporting them. For more
      details, refer to the OpenVINO Legacy Features and
      Components page.
    * Discontinued in 2024.0:
      + Runtime components:
    - Intel® Gaussian & Neural Accelerator (Intel® GNA).
      Consider using the Neural Processing Unit (NPU) for
      low-powered systems like Intel® Core™ Ultra or 14th
      generation and beyond.
    - OpenVINO C++/C/Python 1.0 APIs (see 2023.3 API
      transition guide for reference).
    - All ONNX Frontend legacy API (known as ONNX_IMPORTER_API)
    - 'PerformanceMode.UNDEFINED' property as part of the
      OpenVINO Python API
      + Tools:
    - Deployment Manager. See installation and deployment
      guides for current distribution options.
    - Accuracy Checker.
    - Post-Training Optimization Tool (POT). Neural Network
      Compression Framework (NNCF) should be used instead.
    - A Git patch for NNCF integration with 
      huggingface/transformers. The recommended approach 
      is to use huggingface/optimum-intel for applying NNCF
      optimization on top of models from Hugging Face.
    - Support for Apache MXNet, Caffe, and Kaldi model formats.
      Conversion to ONNX may be used as a solution.
    * Deprecated and to be removed in the future:
      + The OpenVINO™ Development Tools package (pip install
      openvino-dev) will be removed from installation options
      and distribution channels beginning with OpenVINO 2025.0.
      + Model Optimizer will be discontinued with OpenVINO 2025.0.
      Consider using the new conversion methods instead. For
      more details, see the model conversion transition guide.
      + OpenVINO property Affinity API will be discontinued with
      OpenVINO 2025.0. It will be replaced with CPU binding
      configurations (ov::hint::enable_cpu_pinning).
      + OpenVINO Model Server components:
    - “auto shape” and “auto batch size” (reshaping a model in
      runtime) will be removed in the future. OpenVINO’s dynamic
      shape models are recommended instead.
      + A number of notebooks have been deprecated. For an
      up-to-date listing of available notebooks, refer to the
      OpenVINO™ Notebook index (openvinotoolkit.github.io).
* Thu May 09 2024 Alessandro de Oliveira Faria <cabelo@opensuse.org>
  - Fix sample source path in build script:
    * openvino-fix-build-sample-path.patch
  - Update to 2024.1.0
  - More Generative AI coverage and framework integrations to
    minimize code changes.
    * Mixtral and URLNet models optimized for performance
      improvements on Intel® Xeon® processors.
    * Stable Diffusion 1.5, ChatGLM3-6B, and Qwen-7B models
      optimized for improved inference speed on Intel® Core™
      Ultra processors with integrated GPU.
    * Support for Falcon-7B-Instruct, a GenAI Large Language Model
      (LLM) ready-to-use chat/instruct model with superior
      performance metrics.
    * New Jupyter Notebooks added: YOLO V9, YOLO V8
      Oriented Bounding Boxes Detection (OBB), Stable Diffusion
      in Keras, MobileCLIP, RMBG-v1.4 Background Removal, Magika,
      TripoSR, AnimateAnyone, LLaVA-Next, and RAG system with
      OpenVINO and LangChain.
  - Broader Large Language Model (LLM) support and more model
    compression techniques.
    * LLM compilation time reduced through additional optimizations
      with compressed embedding. Improved 1st token performance of
      LLMs on 4th and 5th generations of Intel® Xeon® processors
      with Intel® Advanced Matrix Extensions (Intel® AMX).
    * Better LLM compression and improved performance with oneDNN,
      INT4, and INT8 support for Intel® Arc™ GPUs.
    * Significant memory reduction for select smaller GenAI
      models on Intel® Core™ Ultra processors with integrated GPU.
  - More portability and performance to run AI at the edge,
    in the cloud, or locally.
    * The preview NPU plugin for Intel® Core™ Ultra processors
      is now available in the OpenVINO open-source GitHub
      repository, in addition to the main OpenVINO package on PyPI.
    * The JavaScript API is now more easily accessible through
      the npm repository, enabling JavaScript developers’ seamless
      access to the OpenVINO API.
    * FP16 inference on ARM processors is now enabled by default
      for Convolutional Neural Networks (CNNs).
  - Support Change and Deprecation Notices
    * Using deprecated features and components is not advised. They
      are available to enable a smooth transition to new solutions
      and will be discontinued in the future. To keep using
      discontinued features, you will have to revert to the last
      LTS OpenVINO version supporting them.
    * For more details, refer to the OpenVINO Legacy Features
      and Components page.
    * Discontinued in 2024.0:
      + Runtime components:
    - Intel® Gaussian & Neural Accelerator (Intel® GNA).
      Consider using the Neural Processing Unit (NPU)
      for low-powered systems like Intel® Core™ Ultra or
      14th generation and beyond.
    - OpenVINO C++/C/Python 1.0 APIs (see 2023.3 API
      transition guide for reference).
    - All ONNX Frontend legacy API (known as
      ONNX_IMPORTER_API)
    - 'PerformanceMode.UNDEFINED' property as part of
      the OpenVINO Python API
      + Tools:
    - Deployment Manager. See installation and deployment
      guides for current distribution options.
    - Accuracy Checker.
    - Post-Training Optimization Tool (POT). Neural Network
      Compression Framework (NNCF) should be used instead.
    - A Git patch for NNCF integration with
      huggingface/transformers. The recommended approach
      is to use huggingface/optimum-intel for applying
      NNCF optimization on top of models from Hugging
      Face.
    - Support for Apache MXNet, Caffe, and Kaldi model
      formats. Conversion to ONNX may be used as
      a solution.
    * Deprecated and to be removed in the future:
      + The OpenVINO™ Development Tools package (pip install
      openvino-dev) will be removed from installation options
      and distribution channels beginning with OpenVINO 2025.0.
      + Model Optimizer will be discontinued with OpenVINO 2025.0.
      Consider using the new conversion methods instead. For
      more details, see the model conversion transition guide.
      + OpenVINO property Affinity API will be discontinued with
      OpenVINO 2025.0. It will be replaced with CPU binding
      configurations (ov::hint::enable_cpu_pinning).
      + OpenVINO Model Server components:
    - “auto shape” and “auto batch size” (reshaping a model
      in runtime) will be removed in the future. OpenVINO’s
      dynamic shape models are recommended instead.
* Tue Apr 23 2024 Atri Bhattacharya <badshah400@gmail.com>
  - License update: play safe and list all third party licenses as
    part of the License tag.
* Tue Apr 23 2024 Atri Bhattacharya <badshah400@gmail.com>
  - Switch to _service file as tagged Source tarball does not
    include `./thirdparty` submodules.
  - Update openvino-fix-install-paths.patch to fix python module
    install path.
  - Enable python module and split it out into a python subpackage
    (for now default python3 only).
  - Explicitly build python metadata (dist-info) and install it
    (needs simple sed hackery to support "officially" unsupported
    platform ppc64le).
  - Specify ENABLE_JS=OFF to turn off javascript bindings as
    building these requires downloading npm stuff from the network.
  - Build with system pybind11.
  - Bump _constraints for updated disk space requirements.
  - Drop empty %check section, rpmlint was misleading when it
    recommended adding this.
* Fri Apr 19 2024 Atri Bhattacharya <badshah400@gmail.com>
  - Numerous specfile cleanups:
    * Drop redundant `mv` commands and use `install` where
      appropriate.
    * Build with system protobuf.
    * Fix Summary tags.
    * Trim package descriptions.
    * Drop forcing CMAKE_BUILD_TYPE=Release, let macro default
      RelWithDebInfo be used instead.
    * Correct naming of shared library packages.
    * Separate out libopenvino_c.so.* into own shared lib package.
    * Drop rpmlintrc rule used to hide shlib naming mistakes.
    * Rename Source tarball to %{name}-%{version}.EXT pattern.
    * Use ldconfig_scriptlet macro for post(un).
  - Add openvino-onnx-ml-defines.patch -- Define ONNX_ML at compile
    time when using system onnx to allow using 'onnx-ml.pb.h'
    instead of 'onnx.pb.h', the latter not being shipped with
    openSUSE's onnx-devel package (gh#onnx/onnx#3074).
  - Add openvino-fix-install-paths.patch: Change hard-coded install
    paths in upstream cmake macro to standard Linux dirs.
  - Add openvino-ComputeLibrary-include-string.patch: Include header
    for std::string.
  - Add external devel packages as Requires for openvino-devel.
  - Pass -Wl,-z,noexecstack to %build_ldflags to avoid an exec stack
    issue with intel CPU plugin.
  - Use ninja for build.
  - Adapt _constraints file for correct disk space and memory
    requirements.
  - Add empty %check section.
* Mon Apr 15 2024 Alessandro de Oliveira Faria <cabelo@opensuse.org>
  - Initial package
  - Version 2024.0.0
  - Add openvino-rpmlintrc.

Files

/usr/lib64/libopenvino_ir_frontend.so.2024.4.0
/usr/lib64/libopenvino_ir_frontend.so.2440

