EP3899800A1 - Processeur de traitement de donnees, procede et programme d'ordinateur correspondant - Google Patents
- Publication number
- EP3899800A1 (application EP19813025.4A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- function
- activation
- calculation
- configurable
- neuron
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
Definitions
- TITLE: Data processing processor, and corresponding method and computer program
- The invention relates to the hardware realization of neural networks. More particularly, the invention relates to the physical implementation of adaptable and configurable neural networks. More specifically still, the invention relates to the implementation of a generic neural network whose configuration and operation can be adapted as required.
- A neural network is a digital system whose design was originally inspired by the functioning of biological neurons.
- A neural network is more generally modeled as a system comprising a processing algorithm and statistical data (notably comprising weights).
- The processing algorithm processes the input data, combining it with the statistical data to obtain the output results.
- More precisely, the processing algorithm defines the calculations carried out on the input data, in combination with the statistical data of the network, to deliver the output results.
- Computer neural networks are divided into layers. They generally have an input layer, one or more intermediate (hidden) layers and an output layer.
- The general functioning of the computerized neural network, and therefore the general processing applied to the input data, consists in implementing a process applied successively to each layer.
- A neuron generally comprises, on the one hand, a combination function and, on the other hand, an activation function.
- This combination function and this activation function are implemented in a computerized manner by the use of an algorithm associated with the neuron or with a set of neurons located in the same layer.
- The combination function is used to combine the input data with the statistical data (the synaptic weights).
- The input data is materialized in the form of a vector, each component of the vector representing a given value.
- The statistical values, i.e. the synaptic weights, are materialized in the same way, as a vector.
- The combination function is therefore formalized as a vector-to-scalar function, as follows:
- a calculation of a linear combination of the inputs is carried out, i.e. the combination function returns the scalar product of the vector of inputs and the vector of synaptic weights;
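The linear combination described above can be sketched as follows (an illustrative example, not the patent's implementation; the function name is hypothetical):

```python
def combination(x, w):
    """Combination function: scalar product of the input vector x
    and the synaptic-weight vector w."""
    if len(x) != len(w):
        raise ValueError("input and weight vectors must have the same length")
    return sum(xi * wi for xi, wi in zip(x, w))
```

For example, `combination([1, 2, 3], [4, 5, 6])` returns the scalar 32.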
- The activation function, for its part, is used to introduce a break in linearity in the functioning of the neuron.
- Thresholding functions generally have three intervals: below the threshold, the neuron is non-active (often, in this case, its output is worth 0 or -1); around the threshold, a transition phase; above the threshold, the neuron is active (often, in this case, its output is worth 1).
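The three intervals above can be sketched as follows; the threshold `theta`, the transition half-width `eps` and the output levels are hypothetical parameters chosen for illustration:

```python
def threshold_activation(s, theta=0.0, eps=0.5, low=-1.0, high=1.0):
    """Three-interval thresholding: below theta - eps the neuron is
    non-active (output low, e.g. 0 or -1); above theta + eps it is
    active (output high, e.g. 1); in between, a linear transition."""
    if s <= theta - eps:
        return low
    if s >= theta + eps:
        return high
    # transition phase around the threshold (linear here)
    return low + (high - low) * (s - (theta - eps)) / (2 * eps)
```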
- The hardware implementation proposed in this document is, however, limited in scope: it is restricted to the implementation of a convolutional neural network in which numerous simplifications are made. It nevertheless provides an implementation of fixed-point or floating-point calculations.
- The article “Implementation of Fixed-point Neuron Models with Threshold, Ramp and Sigmoid Activation Functions” by Lei Zhang (2017) also deals with the implementation of a neural network, including fixed-point calculations for a particular neuron and three specific activation functions, each implemented separately.
- The invention does not suffer from at least one of these problems of the prior art. More particularly, the invention relates to a data processing processor comprising at least one processing memory and a calculation unit, the processor being characterized in that the calculation unit comprises a set of configurable calculation units called configurable neurons, each configurable neuron comprising a combination function calculation module and an activation function calculation module, each activation function calculation module comprising a register for receiving a configuration command, said command determining the activation function to be executed from among at least two activation functions executable by the activation function calculation module.
- The invention makes it possible to configure, at execution time, a set of reconfigurable neurons so that they execute a predetermined function according to the command word supplied to the neurons during execution.
- The command word, received in a memory space of the reconfigurable neuron (which can be dedicated), can be different for each layer of a particular neural network, and can thus be part of the parameters of the neural network to be executed (implemented) on the processor in question.
- The at least two activation functions executable by the activation function calculation module belong to the group comprising:
- A reconfigurable neuron is thus able to implement the main activation functions used in industry.
- The activation function calculation module is configured to approximate said at least two activation functions.
- The computational capacity required of the neural processor carrying a set of reconfigurable neurons can thus be reduced, resulting in a reduction in size and consumption, and therefore in the energy necessary for implementing the proposed technique, compared with the prior art.
- The activation function calculation module comprises a sub-module for calculating a basic operation corresponding to an approximation of the sigmoid of the negative of the absolute value of x:
- The approximation of said at least two activation functions is performed as a function of an approximation parameter λ.
- The approximation parameter λ can thus be used, together with the command word, to define the behavior of the calculation unit of the basic operation when calculating an approximation of the activation function designated by the command word.
- The command word routes the calculation to be performed in the calculation unit of the activation function, while the approximation parameter λ parameterizes this calculation.
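The routing role of the command word and the conditioning role of the approximation parameter can be sketched as a dispatch. This is an illustrative model only: the command codes below are hypothetical (the patent does not specify an encoding), and the functions are computed exactly rather than approximated:

```python
import math

def afu(x, cmd, lam=1.0):
    """Sketch of the AFU dispatch: the command word cmd routes the
    calculation to one activation function; lam (the parameter λ)
    conditions that calculation. Command codes are hypothetical."""
    if cmd == 0:   # sigmoid
        return 1.0 / (1.0 + math.exp(-lam * x))
    if cmd == 1:   # hyperbolic tangent
        return math.tanh(lam * x)
    if cmd == 2:   # Gaussian
        return math.exp(-(lam * x) ** 2)
    if cmd == 3:   # RELU
        return max(0.0, x)
    raise ValueError(f"unknown command word: {cmd}")
```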
- The approximation of said at least two activation functions is carried out by configuring the activation function calculation module so that the calculations are performed in fixed point or floating point.
- The number of bits associated with the fixed-point or floating-point calculations is configured for each layer of the network; this number of bits is therefore itself a layer parameter.
- The data processing processor comprises a network configuration memory in which the neural network execution parameters (PS, cmd, λ) are recorded.
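The per-layer parameters enumerated above (synaptic weights PS, command word cmd, approximation parameter λ, written "l" in places) can be sketched as a small record type; the field and type names are illustrative assumptions, not the patent's memory layout:

```python
from dataclasses import dataclass

@dataclass
class LayerConfig:
    """Per-layer execution parameters held in the network
    configuration memory (illustrative names)."""
    weights: list        # PS: synaptic weights of the layer
    cmd: int             # command word selecting the activation function
    lam: float = 1.0     # λ: approximation parameter, shared by the layer

# a hypothetical two-layer configuration
network_config = [
    LayerConfig(weights=[0.5, -0.25], cmd=0, lam=1.0),  # layer 1
    LayerConfig(weights=[1.0, 2.0], cmd=3),             # layer 2
]
```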
- The invention also relates to a data processing method implemented by a data processing processor comprising at least one processing memory and a calculation unit, the calculation unit comprising a set of configurable calculation units called configurable neurons, each configurable neuron comprising a combination function calculation module and an activation function calculation module, the method comprising:
- an initialization step comprising the loading, into the processing memory, of a set of application data, and the loading, into the network configuration memory, of a set of data corresponding to all the synaptic weights and the configurations of the layers;
- the execution of the neural network, comprising, for each layer, the application of a configuration command, said command determining the activation function to be executed from among at least two activation functions executable by the activation function calculation module, the execution delivering processed data;
- the execution of the neural network comprises at least one iteration of the following steps, for a current layer of the neural network:
- The invention makes it possible, within a dedicated processor (or within a specific processing method), to optimize the calculation of the nonlinear functions by factorizing calculations and by using approximations, which reduces the computational load of the operations, in particular at the level of the activation function.
- A step of transmitting information and/or a message from a first device to a second device corresponds, at least partially, for this second device, to a step of receiving the transmitted information and/or message, whether this reception and transmission are direct or carried out through other transport, gateway or intermediation devices, including the devices described herein according to the invention.
- The different steps of the methods according to the invention are implemented by one or more software or computer programs, comprising software instructions intended to be executed by a data processor of an execution device according to the invention and designed to control the execution of the different steps of the methods, implemented at the level of the communication terminal, of the electronic execution device and/or of the remote server, within the framework of a distribution of the processing operations to be performed, determined by scripted source code.
- The invention also relates to programs capable of being executed by a computer or by a data processor, these programs comprising instructions for controlling the execution of the steps of the methods as mentioned above.
- A program can use any programming language, and be in the form of source code, object code, or intermediate code between source code and object code, such as a partially compiled form, or any other desirable form.
- The invention also relates to an information medium readable by a data processor and comprising the instructions of a program as mentioned above.
- The information medium can be any entity or device capable of storing the program.
- The medium may include a storage means, such as a ROM, for example a CD-ROM or a microelectronic circuit ROM, or else a magnetic recording means, for example a removable medium (memory card), a hard disk or an SSD.
- The information medium can be a transmissible medium such as an electrical or optical signal, which can be routed via an electrical or optical cable, by radio or by other means.
- The program according to the invention can in particular be downloaded from an Internet-type network.
- The information medium can be an integrated circuit in which the program is incorporated, the circuit being adapted to execute, or to be used in the execution of, the method in question.
- The invention is implemented by means of software and/or hardware components.
- In this document, the term “module” can correspond to a software component, to a hardware component, or to a set of hardware and software components.
- A software component corresponds to one or more computer programs, one or more subroutines of a program, or more generally to any element of a program or of software capable of implementing a function or a set of functions, as described below for the module concerned.
- Such a software component is executed by a data processor of a physical entity (terminal, server, gateway, set-top box, router, etc.) and is capable of accessing the hardware resources of this physical entity (memories, recording media, communication buses, electronic input/output cards, user interfaces, etc.).
- A hardware component corresponds to any element of a hardware assembly capable of implementing a function or a set of functions, according to what is described below for the module concerned. It can be a programmable hardware component or a component with an integrated processor for the execution of software, for example an integrated circuit, a smart card, a memory card, or an electronic card for the execution of firmware.
- FIG. 1 describes a processor in which the invention is implemented;
- FIG. 2 illustrates the division of the activation function of a configurable neuron according to the invention;
- FIG. 3 describes the sequence of blocks, in a particular embodiment, for the calculation of a value approximating the activation function;
- FIG. 4 describes an embodiment of a data processing method within a neural network according to the invention.
- The layers that make up a neural network implement unit neurons which perform both combination and activation functions, and these functions may differ from one network to another.
- On a given electronic device, such as a smartphone, a tablet or a personal computer, many different neural networks can be implemented, each of these neural networks being used by different applications or processes. Therefore, for the sake of efficient hardware implementation of such neural networks, it is not possible to have a dedicated hardware component for each type of neural network to be implemented. It is for this reason that, for the most part, current neural networks are implemented purely in software and not in hardware (that is to say, directly using processor instructions).
- The inventors have developed and perfected a specific neuron which is physically reconfigurable.
- Such a neuron can take the appropriate form in a running neural network. More particularly, in at least one embodiment, the invention takes the form of a generic processor.
- The calculations performed by this generic processor can, depending on the embodiment, be performed in fixed point or in floating point. When performed in fixed point, the calculations can be configured in terms of the number of bits used, as described above, per layer.
- The processor operates with offline learning. It includes a memory comprising in particular: the synaptic weights of the different layers; the choice of the activation function of each layer; and the configuration and execution parameters of the neurons of each layer. The number of neurons and hidden layers depends on the implementation.
- The memory of the processor is sized as a function of the maximum capacity that it is desired to offer to the neural network.
- A structure for storing the results of a layer, also present within the processor, makes it possible to reuse the same neurons for several consecutive hidden layers.
- This storage structure is, for the sake of simplicity, called the temporary storage memory.
- The number of reconfigurable neurons of the component (processor) is also selected according to the maximum number of neurons that it is desired to allow for a given layer of the neural network.
- FIG. 1 briefly illustrates the general principle of the invention.
- A processor includes a plurality of configurable neurons (sixteen neurons are shown in the figure). Each neuron is composed of two distinct units: a unit for calculating the combination function and a unit for calculating the activation function (AFU). Each of these two units is configurable by a command word (cmd). Neurons are addressed by connection buses (CBUS) and connection routes (CROUT). The input data are represented in the form of a vector (Xt) which contains a certain number of input values (eight values in the example). The values are routed in the network to produce eight scalar results (z0, …, z7). The synaptic weights, the commands and the adjustment parameter λ are described below.
- The invention relates to a data processing processor comprising at least one processing memory (MEM) and a calculation unit (CU), the processor being characterized in that the calculation unit (CU) comprises a set of configurable calculation units (ENC) called configurable neurons, each configurable neuron (NC) comprising a combination function calculation module (MCFC) and an activation function calculation module (MCFA), each activation function calculation module (AFU) comprising a register for receiving a configuration command, said command determining the activation function to be executed from among at least two activation functions executable by the activation function calculation module (AFU).
- The processor also includes a network configuration memory (MEMR) in which the neural network execution parameters (the synaptic weights PS, the command words cmd and the approximation parameter λ) are recorded.
- This memory can be the same as the processing memory (MEM).
- A configurable neuron of the configurable neural network that is the object of the invention comprises two configurable calculation modules (units): one in charge of the calculation of the combination function and one in charge of the calculation of the activation function.
- The activation function calculation module is also called the AFU.
- The activation function calculation module optimizes the calculations common to all the activation functions, by simplifying and approximating these calculations.
- An illustrative implementation is detailed below. As illustrated, the activation function calculation module performs its calculations so as to produce a result close to that of the chosen activation function, by pooling the calculation parts used to reproduce an approximation of the activation function.
- The artificial neuron, in this embodiment, is broken down into two configurable elements (modules).
- The first configurable element calculates either the scalar product (most networks) or the Euclidean distance.
- The second element (module), called UFA (Activation Function Unit, AFU in the figures), implements the activation functions.
- The first module implements an approximation of the square-root calculation for the computation of the Euclidean distance.
- This approximation is made in fixed point in the case of processors with low computational capacity.
- The UFA allows the use of the sigmoid, the hyperbolic tangent, the Gaussian and the RELU.
- This artificial neuron circuit is parameterized by the reception of one or several command words, depending on the embodiment.
- A command word is, in the present case, a signal comprising a bit or a series of bits (for example a byte, making it possible to have 256 possible commands, or twice 128 commands) which is transmitted to the circuit to configure it.
- The proposed implementation of a neuron makes it possible to create “classic” networks as well as latest-generation neural networks such as ConvNets (convolutional neural networks).
- This computing architecture can be implemented, in a practical way, in the form of a software library for standard processors or in the form of hardware implementation for FPGAs or ASICs.
- A configurable neuron is composed of a distance and/or scalar product calculation module, which depends on the type of neuron used, and of a UFA module.
- A configurable generic neuron, like any neuron, receives fixed-point or floating-point input data, including:
- X is the input data vector
- W is the vector of the synaptic weights of the neuron
- λ, which represents the parameter of the sigmoid, the hyperbolic tangent, the Gaussian or the RELU.
- This parameter is identical for all neurons in a layer.
- This parameter λ is supplied to the neuron with the command word, setting the implementation of the neuron.
- This parameter can be qualified as an approximation parameter in the sense that it is used to carry out an approximate calculation of the value of the function using one of the approximation methods presented below.
- The four main functions reproduced (and factored) by the UFA are:
- The first three functions are calculated approximately. This means that the configurable neuron does not implement a precise calculation of these functions, but instead an approximation of them, which makes it possible to reduce the load, the time and the resources necessary to obtain the result.
- Figure 2 shows the general architecture of the activation function circuit. This functional architecture takes into account the previous approximations (methods 1 to 4) and factorizations in the calculation functions.
- A hardware implementation of a generic neural network with a configurable neural cell, which makes it possible to implement any neural network, including ConvNets.
- AFU in the form of a software library for standard processors or for FPGAs.
- AFU integration in the form of a hardware architecture for all standard processors, or for FPGAs or ASICs.
- The “basic operation” is no longer a standard mathematical operation like the addition and multiplication found in all conventional processors, but the sigmoid of the negative of the absolute value of x.
- This “basic operation”, in this embodiment, is common to all the other nonlinear functions. In this embodiment, an approximation of this function is used: an approximation of a high-level function is thus used to perform the calculations of the other high-level functions, without resorting to conventional methods of calculating these functions.
- The result of the sigmoid for a positive value of x is deduced from this basic operation using the symmetry of the sigmoid function.
- The hyperbolic tangent function is obtained by using the standard correspondence relation which links it to the sigmoid function.
- The Gaussian function is obtained via the derivative of the sigmoid, which is a curve approximating the Gaussian; the derivative of the sigmoid is obtained as the product of the sigmoid function and its symmetric.
- The RELU function, which is a linear function for positive x, does not use the basic operation of the computation of nonlinear functions.
- The leaky RELU function, which uses a linear proportionality function for negative x, does not use the basic operation of calculating nonlinear functions either.
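The factorization described above can be sketched as follows. This is an illustrative model, with the basic operation σ(−|x|) computed exactly (the patent approximates it with line segments), and a conventional leaky-RELU slope of 0.01 assumed for negative x:

```python
import math

def base_op(x):
    """'Basic operation': sigmoid of the negative absolute value of x,
    i.e. sigma(-|x|) = 1 / (1 + exp(|x|))."""
    return 1.0 / (1.0 + math.exp(abs(x)))

def sigmoid(x):
    # symmetry of the sigmoid: sigma(x) = 1 - sigma(-x)
    y1 = base_op(x)
    return y1 if x < 0 else 1.0 - y1

def tanh(x):
    # standard correspondence relation: tanh(x) = 2*sigma(2x) - 1
    return 2.0 * sigmoid(2.0 * x) - 1.0

def gaussian(x):
    # derivative of the sigmoid (product of the base result and its
    # symmetric), scaled by 4 so that the peak value is 1
    y1 = base_op(x)
    return 4.0 * y1 * (1.0 - y1)

def relu(x):
    # linear for positive x; no basic operation needed
    return max(0.0, x)

def leaky_relu(x, a=0.01):
    # a = hypothetical proportionality coefficient for negative x
    return x if x > 0 else a * x
```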
- This block performs a multiplication operation whatever the representation format of the reals. Any multiplication method can be used.
- The division may or may not be included in the AFU.
- Blocks no. 2 to 4 carry out the calculation of the “basic operation” of the nonlinear functions, with the exception of the RELU and leaky RELU functions, which are linear functions with different coefficients of proportionality depending on whether x is negative or positive.
- This basic operation uses a line-segment approximation of the sigmoid function for the negative of the absolute value of x.
- These blocks can be grouped by two or three depending on the desired optimization. Each line segment is defined on an interval lying between the integer part of x and the integer part of x plus one:
- Block no. 2, named the separator, extracts the integer part and takes the absolute value; this can also amount to taking the absolute value of the integer part (rounded down) of x:
- The truncated part provided by this block gives the start of the segment, and the fractional part represents the position on the line defined on this segment. The separation of the integer part and the fractional part can be obtained in any possible way and whatever the representation format of x.
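The role of blocks no. 2 to 4 can be sketched as a line-segment approximation on unit intervals. This reconstruction is an assumption (the patent does not give the exact block equations here); linear interpolation between the exact segment endpoints is used as the simplest instance of the scheme:

```python
import math

def base_op_piecewise(x):
    """Line-segment approximation of sigma(-|x|): the separator
    (block no. 2) splits |x| into integer part (segment start) and
    fractional part (position on the segment); the value is then
    interpolated linearly between the segment endpoints."""
    t = abs(x)
    n = math.floor(t)   # block no. 2: integer part
    f = t - n           # block no. 2: fractional part

    def sig_neg(u):
        # exact sigma(-u), used here only to fix the segment endpoints
        return 1.0 / (1.0 + math.exp(u))

    # blocks no. 3-4 (sketched): value on the segment [n, n+1]
    y_a, y_b = sig_neg(n), sig_neg(n + 1)
    return y_a + f * (y_b - y_a)
```

On unit intervals the interpolation error is small (below about 0.013, bounded by the curvature of the sigmoid), which illustrates why a segment-based basic operation is adequate.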
- Block no. 4 calculates the value y1, common to all the functions, from the numerator yn supplied by block no. 3 and the integer part supplied by block no. 2.
- Block no. 5 calculates the result of the nonlinear function, which depends on the value of the command word cmd, on the sign of x and, of course, on the result y1 of block no. 4.
- the Gaussian: z = 4·y1·(1 − y1), whatever the sign of x.
- The approximation of the Gaussian is carried out using the derivative of the sigmoid. With this method, a curve close to the Gaussian function is obtained.
- The derivative of the sigmoid is calculated simply by multiplying the result of the basic operation by its symmetric.
- The parameter λ defines the standard deviation of the Gaussian, obtained by dividing 1.7 by λ. This division operation may or may not be included in the AFU.
- This calculation uses a multiplication with two operands and a multiplication by a power of two.
- Block no. 5 is a block which contains the various final calculations of the nonlinear functions described above, as well as a switching block which selects the operation according to the value of the control signal and the sign of x.
- 5.3. Description of an embodiment of a dedicated component capable of implementing a plurality of different neural networks, and of a data processing method.
- A component comprising a set of 16,384 reconfigurable neurons is positioned on the processor.
- Each of these reconfigurable neurons receives its data directly from the temporary storage memory, which comprises at least 16,384 entries (or at least 32,768, depending on the embodiment), each entry value corresponding to one byte.
- The size of the temporary storage memory is therefore 16 KB (or 32 KB). Depending on the operational implementation, the size of the temporary storage memory can be increased to facilitate the process of rewriting the result data.
- The component also includes a memory for storing the configuration of the neural network. In this example, it is assumed that the configuration storage memory is dimensioned to allow the implementation of 20 layers, each of these layers potentially comprising a number of synaptic weights corresponding to the total number of possible entries, i.e. 16,384 different synaptic weights for each layer, each one byte in size.
- For each layer there are also at least two command words, each one byte long, i.e. a total of 16,386 bytes per layer, and therefore, for the 20 layers, a minimum total of about 320 KB.
- This memory also includes a set of registers dedicated to the storage of data representative of the configuration of the network: number of layers, number of neurons per layer, ordering of the results of a layer, etc. The entire component therefore requires, in this configuration, a memory size of less than 1 MB.
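The sizing above can be checked with a short calculation (variable names are illustrative):

```python
# Configuration-memory sizing from the example: 20 layers, 16,384
# one-byte synaptic weights per layer, plus two one-byte command words.
weights_per_layer = 16_384
cmd_words_per_layer = 2
layers = 20

bytes_per_layer = weights_per_layer + cmd_words_per_layer  # 16,386 bytes
total_bytes = layers * bytes_per_layer                     # 327,720 bytes
# about 320 KB; well under 1 MB even with the configuration registers
print(bytes_per_layer, total_bytes, total_bytes / 1024)
```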
- A set of data (EDAT), corresponding for example to a set of application data coming from a given hardware or software application, is loaded into the temporary storage memory (MEM).
- EDAT a set of data
- MEM temporary storage memory
- The neural network is then executed (step 1) by the processor of the invention, according to an iterative implementation (as long as the current layer index is less than the number of layers of the network, i.e. nblyer) of the following steps, executed for each layer of the neural network from the first layer to the last, and comprising, for a current layer: transmission (10) of the first command word to all of the implemented neurons, defining the combination function implemented (linear combination or Euclidean norm) for the current layer;
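The iterative, layer-by-layer execution described above can be sketched as follows. This is an illustrative model only: `layers` is assumed to be a list of per-layer parameter records (hypothetical keys `weights`, `cmd`, `lam`), and `neuron` stands in for a configured neuron applying its combination and activation functions:

```python
def run_network(x, layers, neuron):
    """Sketch of step 1: for each layer, the command word and the
    parameter lam configure the neurons, then every neuron of the
    layer processes the current data; the layer's results feed the
    next layer, and the last layer's results are the final output."""
    data = list(x)
    for layer in layers:
        # each configured neuron combines the data with its weight
        # vector and applies the selected activation function
        data = [neuron(data, w, layer["cmd"], layer["lam"])
                for w in layer["weights"]]
    return data  # final results (SDAT)
```

For instance, with a neuron that only computes the scalar product, one layer with weight vectors `[1, 1]` and `[1, -1]` maps the input `[2, 3]` to `[5, -1]`.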
- SDAT final results
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| FR1873141A FR3090163B1 (fr) | 2018-12-18 | 2018-12-18 | Processeur de traitement de données, procédé et programme d’ordinateur correspondant |
| PCT/EP2019/083891 WO2020126529A1 (fr) | 2018-12-18 | 2019-12-05 | Processeur de traitement de donnees, procede et programme d'ordinateur correspondant. |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| EP3899800A1 true EP3899800A1 (fr) | 2021-10-27 |
Family
ID=66867241
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| EP19813025.4A Withdrawn EP3899800A1 (fr) | 2018-12-18 | 2019-12-05 | Processeur de traitement de donnees, procede et programme d'ordinateur correspondant |
Country Status (5)
| Country | Link |
|---|---|
| US (1) | US20220076103A1 (fr) |
| EP (1) | EP3899800A1 (fr) |
| CN (1) | CN113272826A (fr) |
| FR (1) | FR3090163B1 (fr) |
| WO (1) | WO2020126529A1 (fr) |
Families Citing this family (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11630990B2 (en) * | 2019-03-19 | 2023-04-18 | Cisco Technology, Inc. | Systems and methods for auto machine learning and neural architecture search |
| US20220327370A1 (en) * | 2021-04-12 | 2022-10-13 | Sigmasense, Llc. | Hybrid Low Power Analog to Digital Converter (ADC) Based Artificial Neural Network (ANN) with Analog Based Multiplication and Addition |
| US20240311622A1 (en) * | 2023-03-17 | 2024-09-19 | Qualcomm Incorporated | Selectable data-aware activation functions in neural networks |
| CN118944108B (zh) * | 2024-10-14 | 2025-03-14 | 湖南西来客储能科技有限公司 | Load-balancing regulation method and system for an energy storage device |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5361326A (en) * | 1991-12-31 | 1994-11-01 | International Business Machines Corporation | Enhanced interface for a neural network engine |
| DE102016216944A1 (de) * | 2016-09-07 | 2018-03-08 | Robert Bosch Gmbh | Method for calculating a neuron layer of a multilayer perceptron model with simplified activation function |
| US11995532B2 (en) * | 2018-12-05 | 2024-05-28 | Arm Limited | Systems and devices for configuring neural network circuitry |
- 2018
  - 2018-12-18 FR FR1873141A patent/FR3090163B1/fr not_active Expired - Fee Related
- 2019
  - 2019-12-05 US US17/414,628 patent/US20220076103A1/en not_active Abandoned
  - 2019-12-05 WO PCT/EP2019/083891 patent/WO2020126529A1/fr not_active Ceased
  - 2019-12-05 EP EP19813025.4A patent/EP3899800A1/fr not_active Withdrawn
  - 2019-12-05 CN CN201980084061.1A patent/CN113272826A/zh active Pending
Also Published As
| Publication number | Publication date |
|---|---|
| WO2020126529A1 (fr) | 2020-06-25 |
| CN113272826A (zh) | 2021-08-17 |
| FR3090163B1 (fr) | 2021-04-30 |
| FR3090163A1 (fr) | 2020-06-19 |
| US20220076103A1 (en) | 2022-03-10 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | STAA | Information on the status of an EP patent application or granted EP patent | STATUS: UNKNOWN |
| | STAA | Information on the status of an EP patent application or granted EP patent | STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
| | PUAI | Public reference made under article 153(3) EPC to a published international application that has entered the European phase | ORIGINAL CODE: 0009012 |
| | STAA | Information on the status of an EP patent application or granted EP patent | STATUS: REQUEST FOR EXAMINATION WAS MADE |
| 20210521 | 17P | Request for examination filed | Effective date: 20210521 |
| | AK | Designated contracting states | Kind code of ref document: A1. Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
| | DAV | Request for validation of the european patent (deleted) | |
| | DAX | Request for extension of the european patent (deleted) | |
| | STAA | Information on the status of an EP patent application or granted EP patent | STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
| 20230701 | 18D | Application deemed to be withdrawn | Effective date: 20230701 |