Science & Research

To accomplish this task and to automatically generate a sufficiently large number of additional spring geometries, a Python script was programmed. Starting from the initial spring, this script varies the two splines within certain intervals. The script also enforces limits that must not be exceeded, since exceeding them would produce unmanufacturable, implausible or geometrically impossible spring designs. In total, 14,782 different spring geometries were generated.

The second part of the necessary training data consists of the behavior of the geometries under load. For this purpose, BASF's existing simulation was used. Since the simulation of a single spring can take up to five minutes, this limits the maximum amount of training data obtainable within the project duration. Furthermore, manual effort is required to read in the geometries, start the simulations and make the results available.

With the available data, the training of the model can begin. Since the model is to learn the relationship between requirements and geometry, the simulated behavior in the form of a load-deflection curve of each additional spring is defined as the requirement for training, because the geometry that exactly matches it is known. This geometry, or rather its spline parameters, is used as the label of the training data set, i.e. as the target value the machine learning model is to learn. In addition to the characteristic curve, the dimensions of the additional spring are also included in the training.

One problem is how to prepare the simulated load curves as requirements for the training. Specifying the individual x and y values of the spring characteristic would result in a very large number of parameters. Therefore, the load curve must be approximated mathematically.

Figure 4: Approximation of a spring geometry by two spline curves with their respective points. A represents the points of the outer geometry and B describes the inner geometry.

48 ProductDataJournal 2023-1
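The variation step described above can be sketched as follows. The actual BASF script, spline parametrization and limit values are not published, so all function names, control points and bounds below are illustrative assumptions: control points are perturbed within an interval, and candidate designs violating a simple plausibility limit (a minimum wall thickness between inner and outer spline) are discarded.

```python
import random

# Hypothetical sketch of the geometry-variation script (names and bounds
# are assumptions; the real parametrization is BASF-internal).

def vary_spline(base_points, delta, rng):
    """Shift each (height, radius) control point radially within +/- delta."""
    return [(h, r + rng.uniform(-delta, delta)) for h, r in base_points]

def is_valid(outer, inner, min_wall=1.0):
    """Reject implausible designs: the inner spline must leave at least
    `min_wall` of material to the outer spline everywhere (simplified check)."""
    return all(ro - ri >= min_wall for (_, ro), (_, ri) in zip(outer, inner))

def generate_geometries(base_outer, base_inner, n_target, delta=2.0, seed=0):
    rng = random.Random(seed)
    geometries = []
    while len(geometries) < n_target:
        outer = vary_spline(base_outer, delta, rng)
        inner = vary_spline(base_inner, delta, rng)
        if is_valid(outer, inner):          # skip unmanufacturable designs
            geometries.append((outer, inner))
    return geometries

# Toy base geometry: (height, radius) control points for outer/inner splines
base_outer = [(0, 20.0), (25, 18.0), (50, 16.0)]
base_inner = [(0, 12.0), (25, 11.0), (50, 10.0)]
samples = generate_geometries(base_outer, base_inner, n_target=100)
print(len(samples))  # 100 valid geometries
```

The validity filter is the key design point: sampling without it would pollute the training set with geometries the simulation cannot process.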
To describe the characteristic curve with a compact yet representative set of parameters, a 5th-degree polynomial is fitted to it; the polynomial's coefficients are used in the further training process.

A feed-forward neural network was selected as the model architecture, since there is no contextual correlation between the individual samples. The dataset was split into a training and a test dataset, and hyperparameter tuning, the systematic testing of different settings and parameters, was used to determine a neural network with four layers as the most suitable configuration of the model. Hyperparameter tuning and statistical evaluation were performed by calculating the mean squared error between the geometry predicted by the model and the expected geometry, i.e. the label. The trained model with the lowest mean squared error on the test data was selected for the content-based evaluation in the following sections.

Results and discussion

Ten simulated behaviors and their designs were randomly selected to assess whether and to what extent the model is capable of predicting jounce bumper geometries based on requirements. The simulated characteristics were passed to the trained model as requirements, and from each a geometry was predicted. Each predicted geometry in turn had to be simulated in order to compare both load curves: the one generated by the model and the one of the requirement. Figure 5 shows two load-deflection curves of the generated geometries, representative of the ten randomly selected results.

Overall, the results split into two groups. In the left plot in Figure 5, the characteristic curves of the requirement (green) and the generated design (blue) are relatively close to each other. In the second group (on the right in Figure 5), the shapes of the curves are similar, but there is a relatively large offset between them. On the positive side, the ML model is able to approximate the desired behaviors and, in some cases, comes very close.
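The curve-compaction step can be sketched as below. Instead of feeding hundreds of (deflection, force) samples into the network, a 5th-degree polynomial is fitted and only its six coefficients serve as the requirement vector. The synthetic curve here is illustrative, not BASF simulation data.

```python
import numpy as np

# Reduce a sampled load-deflection curve to 6 polynomial coefficients.
deflection = np.linspace(0.0, 40.0, 200)           # mm (toy data)
force = 0.05 * deflection**2 + 2.0 * deflection    # toy progressive spring

coeffs = np.polyfit(deflection, force, deg=5)      # 6 coefficients, highest power first
approx = np.polyval(coeffs, deflection)

# Quality check: mean squared error of the approximation
mse = float(np.mean((force - approx) ** 2))
print(len(coeffs), mse)
```

These six coefficients, together with the spring dimensions, would form the model's input; the spline parameters of the matching geometry form the label.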
On the downside, in a group of cases there is a clearly visible gap between the two load curves. This, however, is caused by the data preparation used and not directly by the machine learning model. The green curve shows the original requirement characteristic, while the orange curve shows its polynomial approximation; the large difference between the two is already apparent here. As in the concept in Figure 3, the model generates a geometry based on the approximated curve. Since this approximation is relatively far from the original requirement, the model predicts a geometry that is close to the approximated curve, but not to the actual requirement. Thus, the largest source of deviation is the inaccurate approximation of the spring behavior by a polynomial. Overall, however, the samples show that the trained model is capable of generating geometries that approximate the requirements, and thus the approach presented here appears feasible.
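The deviation source discussed above can be reproduced with a toy example: a load-deflection curve with a sharp stiffness change is hard to capture with a single 5th-degree polynomial, so the fitted requirement already differs from the original one before the model sees it. The curve below is synthetic; the real BASF curves are not public.

```python
import numpy as np

# Synthetic curve with a sharp knee: soft initial region, then a much
# stiffer region after 30 mm (a simplified stand-in for bumper behavior).
deflection = np.linspace(0.0, 40.0, 400)
force = np.where(deflection < 30.0,
                 1.5 * deflection,
                 45.0 + 25.0 * (deflection - 30.0))

coeffs = np.polyfit(deflection, force, deg=5)
approx = np.polyval(coeffs, deflection)

# Largest gap between the original curve and its polynomial approximation:
# this residual error propagates into the geometry prediction.
max_abs_error = float(np.max(np.abs(force - approx)))
print(max_abs_error)
```

A higher polynomial degree, a piecewise fit, or spline-based curve features would be natural candidates to reduce this approximation error.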