29 Matching Annotations
  1. Jun 2022
    1. Goals

      Here you should describe the future situation: there is an LDAP that holds the users, a Keycloak that clients can connect to in order to get their session keys/tokens, and maybe a Redis/Postgres that holds additional user information that can be used for user statistics.

    2. Currently iBet

      This is not nearly enough about iBet. I could not understand how it works.

      What about User, Extended User, Chipcard, Customercard, Barcodes, etc.? All of this has to be supported in the end by your proposal. If we have a shiny new system and cannot map it to the current iBet implementation, then we have gained nothing.

    3. The

      I think you should start with some sort of abstract, that states the problem.

      .. currently, users and permissions are scattered across multiple secondary servers. Because of this, the users are only accessible via the iBet server. A consequence of this is that it is hard to change or add new permissions, or to connect new clients, because such operations require C++ code changes. Furthermore, we will most probably need to identify users across different brands in the future for legal, statistical and other reasons. Therefore we need ....

    4. Device authorization

      Why is that? We need to handle automats as well; without that we can just stop with this RFC, because it will solve some problems, but not ours.

    5. A single instance of the software and its supporting infrastructure that serves multiple customers.

      without exposing data to each other

    6. Separates the users from the databases of the services.

      unique users that are valid across services, so that people do not need to register multiple accounts for different services

  2. Apr 2021
    1. we do not have a performance issue, we have a scalability issue

      What does that mean?

      Could you describe in a bit more detail what the underlying issue is? And how you intend to solve it?

      • Do we have a timing / latency issue?
      • Would the call to the new service be synchronous or not?
      • Is it possible to have async code in the server without slowing other operations down?
      • How are other things like risk evaluation impacted? (can we ensure that one user does not get processed on several service nodes concurrently allowing him to go over limits?)

      From my gut feeling, the server is not async enough. I fear that we might need to improve the server first before we can use an external service that introduces even more latency. But that could turn out to be a major, hard-to-do change.

      Would it be more feasible to start with other parts that are read-only, like statistics (transactions can be re-executed and are not time-critical), or a service that processes bet radar information (write-only - transactions are not expected to fail)? Do not get me wrong, I really like your idea, but here we deal with a part that needs to read and write time-critical information, which is probably the hardest part to get right. I would feel more comfortable if we could practice on some easier part first.
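      One of the questions above (can we ensure that one user does not get processed on several service nodes concurrently?) is essentially a per-user lease. A minimal single-process sketch of the idea, with hypothetical names; a real deployment would need a distributed lock (e.g. in Redis or the database), not an in-memory set:

      ```python
      class UserLeaseRegistry:
          """Hypothetical sketch: grant at most one processing lease per user,
          so two workers cannot both evaluate the same user's limits at once."""

          def __init__(self):
              self._held = set()

          def acquire(self, user_id):
              if user_id in self._held:
                  return False  # another worker already processes this user
              self._held.add(user_id)
              return True

          def release(self, user_id):
              self._held.discard(user_id)

      registry = UserLeaseRegistry()
      first = registry.acquire("user42")
      second = registry.acquire("user42")   # concurrent attempt is rejected
      registry.release("user42")
      third = registry.acquire("user42")    # possible again after release
      ```

      Whatever mechanism is chosen, the limit evaluation must happen strictly between acquire and release, otherwise the exclusivity buys nothing.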

  3. Mar 2021
      iBet technical metrics, which relate to the inner technical workings of the iBet system, e.g. client connection count, client disconnection count, number of processed RMI calls, etc. iBet business metrics, which relate to the sports-betting business, e.g. number of evaluated bettingcards, number of bettingcards whose overview was calculated, cashout count, cancellations, etc.

      What is the difference between an RMI that cancels and the cancel business metric? When and where is the cancellation technical and when is it more business related?

    2. Aggregator

      I am not sure if we want to do that in iBet. I think it would be good to have the full data in the system that you eventually use for visualization. Therefore the aggregation in iBet does not help you too much, and you have to transfer the 10-second data anyway.

    3. Storing and Aggregating metrics

      I am missing some sort of motivation here, e.g.:

      We need to store/collect data locally before sending it to a database, because creating network traffic for every single event is not cost-efficient and we would spend too much time preparing and sending messages. Therefore we only want to update xyz in discrete steps, which makes it possible to transfer the data into another system.
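      The motivation sketched above (collect locally, flush in discrete steps) could look roughly like this; the names are hypothetical, not actual iBet code:

      ```python
      class MetricBuffer:
          """Hypothetical sketch: count events in memory and flush them in
          discrete steps, so no network message is created per single event."""

          def __init__(self):
              self._counters = {}

          def count(self, name, delta=1):
              self._counters[name] = self._counters.get(name, 0) + delta

          def flush(self):
              """Return the accumulated counts and reset the buffer; the caller
              would send the snapshot to the external database in one message."""
              snapshot, self._counters = self._counters, {}
              return snapshot

      buf = MetricBuffer()
      buf.count("acceptedBets")
      buf.count("acceptedBets")
      buf.count("cashouts", 3)
      snapshot = buf.flush()
      ```

      The flush interval is then the only tuning knob: it trades metric freshness against the number of messages sent.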

      a number of current connections by type: the SecureNet::newConnection() and SecureConn::stateChanged() functions can be probed to gather the metrics about connected secondary servers. Established connections between primary and secondary servers: Client-Server established connections can already be gathered from SecureNet::newConnection() and SecureConn::stateChanged(), but for the connection with secondary servers, the PrimarySecondaryClient class is the correct place to probe, as far as I investigated. The void PrimarySecondaryClient::connected() function establishes a connection between primary and secondary servers; therefore, probing at this point is a good idea.

      Maybe the other servers could just send their data with a label to a DB, as the primary could do. That way we do not need any complicated collection process, and all servers are treated the same.

    5. util/threadloop.cpp

      This is a good point, and it gives us a good overview of how many times something has been called and what the min/max/average time was.
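      The kind of per-call overview mentioned here (call count plus min/max/average time) can be sketched like this; a hypothetical illustration, not the actual threadloop code:

      ```python
      class CallStats:
          """Hypothetical probe: records how often something was called and
          the min/max/average duration, like the threadloop overview above."""

          def __init__(self):
              self.count = 0
              self.total = 0.0
              self.min = None
              self.max = None

          def record(self, duration):
              self.count += 1
              self.total += duration
              self.min = duration if self.min is None else min(self.min, duration)
              self.max = duration if self.max is None else max(self.max, duration)

          @property
          def average(self):
              return self.total / self.count if self.count else 0.0

      stats = CallStats()
      for d in (0.010, 0.030, 0.020):
          stats.record(d)
      ```

      Count and total are enough to derive the average on demand, so the probe itself stays cheap on the hot path.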

      GOOD!

      processedBettingcardsSinceStart - Number of processed/evaluated bettingcards since server-start; processedBettingcardsWithin<TimeSpan> - Number of processed/evaluated bettingcards within the last 1|5|15|60 minutes; acceptedBetsSinceStart - Number of accepted bets since server-start; acceptedCashoutsSinceStart - Number of accepted cashouts since server-start; rejectedCashoutsSinceStart

      They look like they could be mapped to handlers or aggregations of handlers.

    7. Metrics data needs to be shown for the last 1, 5, and 15 minutes, like in the “top” load average.

      should be solved externally - imo

    8. Show metrics data basically inside iBet

      What does that mean? Shouldn't we just pump all data in an external time series database?

    9. No proposed solutions in this document are intended as standards

      It should be in the long run, otherwise this is a waste of time.

      I think we should have some sort of status for this kind of document, like proposal and accepted (tipster standard).

      Do not make this document smaller in the first sentence than it actually is. Let's aim high.