Benchmarking Q-learning methods for intelligent network orchestration in the edge

Reijonen, Joel; Opsenica, Miljenko; Kauppinen, Tero; Komu, Miika; Kjällman, Jimmy; Mecklin, Tomas; Hiltunen, Eero; Arkko, Jari; Simanainen, Timo; Elmusrati, Mohammed (2020-05-13)

 
View/Open
article (3.144 MB)
URI
https://doi.org/10.1109/6GSUMMIT49458.2020.9083745

IEEE
13.05.2020
doi:10.1109/6GSUMMIT49458.2020.9083745
The permanent address of the publication is
https://urn.fi/URN:NBN:fi-fe2020060942391

Description

peer-reviewed
©2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Abstract
We benchmark Q-learning methods with various action selection strategies for intelligent orchestration of the network edge. Q-learning is a reinforcement learning technique that aims to find optimal action policies by exploiting past experiences, without a model that describes the dynamics of the environment. By experiences, we refer to the observed causality between an action and its impact on the environment. In this paper, the environment for Q-learning consists of virtualized networking resources whose dynamics are monitored with Spindump, an in-network latency measurement tool that supports QUIC and TCP. We optimize the orchestration of these networking resources by introducing Q-learning as part of machine learning-driven, intelligent orchestration applicable in the edge. Based on the benchmarking results, we identify which action selection strategies support network orchestration that provides low latency and low packet loss while managing network resource allocation in the edge.
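To illustrate what the abstract describes, the sketch below shows tabular Q-learning combined with an epsilon-greedy action selection strategy. This is a minimal illustration only, not the authors' implementation: the action names, reward signal, hyperparameters, and the env_step interface are hypothetical placeholders for an orchestration environment whose latency and packet loss would be observed with a monitoring tool such as Spindump.

import random

# Minimal tabular Q-learning sketch with epsilon-greedy action selection.
# States and actions are abstract placeholders, not the paper's actual
# orchestration state space.

ALPHA = 0.1      # learning rate
GAMMA = 0.9      # discount factor
EPSILON = 0.1    # exploration probability for epsilon-greedy

ACTIONS = ["scale_up", "scale_down", "keep"]   # hypothetical orchestration actions
q_table = {}                                   # maps (state, action) -> Q value


def q(state, action):
    # Unseen state-action pairs default to a Q value of 0.0.
    return q_table.get((state, action), 0.0)


def select_action(state):
    # Epsilon-greedy: explore with probability EPSILON, otherwise exploit
    # the action with the highest current Q value.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q(state, a))


def update(state, action, reward, next_state):
    # Standard model-free Q-learning update:
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max(q(next_state, a) for a in ACTIONS)
    q_table[(state, action)] = q(state, action) + ALPHA * (
        reward + GAMMA * best_next - q(state, action)
    )


def train(env_step, initial_state, episodes=1000):
    # Hypothetical training loop: env_step(state, action) is assumed to apply
    # the action to the environment and return (reward, next_state), with the
    # reward derived from monitored metrics such as latency and packet loss.
    state = initial_state
    for _ in range(episodes):
        action = select_action(state)
        reward, next_state = env_step(state, action)
        update(state, action, reward, next_state)
        state = next_state

The action selection strategies the paper compares would slot in at select_action; for example, a softmax (Boltzmann) strategy could replace the epsilon-greedy rule while the update step stays the same.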
Collections
  • Artikkelit [1923]