127 Matching Annotations
  1. Sep 2024
    1. Data center emissions probably 662% higher than big tech claims. Can it keep up the ruse?

      Emissions from in-house data centers of Google, Microsoft, Meta and Apple may be 7.62 times higher than official tally. Isabel O'Brien, Sun 15 Sep 2024 17.00 CEST (last modified Wed 18 Sep 2024 22.40 CEST).

      Big tech has made some big claims about greenhouse gas emissions in recent years. But as the rise of artificial intelligence creates ever bigger energy demands, it’s getting hard for the industry to hide the true costs of the data centers powering the tech revolution.

      According to a Guardian analysis, from 2020 to 2022 the real emissions from the “in-house” or company-owned data centers of Google, Microsoft, Meta and Apple are probably about 662% – or 7.62 times – higher than officially reported.

      Amazon is the largest emitter of the big five tech companies by a mile – the emissions of the second-largest emitter, Apple, were less than half of Amazon’s in 2022. However, Amazon has been kept out of the calculation above because its differing business model makes it difficult to isolate data center-specific emissions figures for the company.

      As energy demands for these data centers grow, many are worried that carbon emissions will, too. The International Energy Agency stated that data centers already accounted for 1% to 1.5% of global electricity consumption in 2022 – and that was before the AI boom began with ChatGPT’s launch at the end of that year.

      AI is far more energy-intensive on data centers than typical cloud-based applications. According to Goldman Sachs, a ChatGPT query needs nearly 10 times as much electricity to process as a Google search, and data center power demand will grow 160% by 2030. Goldman competitor Morgan Stanley’s research has made similar findings, projecting data center emissions globally to accumulate to 2.5bn metric tons of CO2 equivalent by 2030.

      In the meantime, all five tech companies have claimed carbon neutrality, though Google dropped the label last year as it stepped up its carbon accounting standards. Amazon is the most recent company to do so, claiming in July that it met its goal seven years early, and that it had implemented a gross emissions cut of 3%.

      “It’s down to creative accounting,” explained a representative from Amazon Employees for Climate Justice, an advocacy group composed of current Amazon employees who are dissatisfied with their employer’s action on climate. “Amazon – despite all the PR and propaganda that you’re seeing about their solar farms, about their electric vans – is expanding its fossil fuel use, whether it’s in data centers or whether it’s in diesel trucks.”

      A misguided metric

      The most important tools in this “creative accounting” when it comes to data centers are renewable energy certificates, or Recs. These are certificates that a company purchases to show it is buying renewable energy-generated electricity to match a portion of its electricity consumption – the catch, though, is that the renewable energy in question doesn’t need to be consumed by a company’s facilities. Rather, the site of production can be anywhere from one town over to an ocean away.

      Recs are used to calculate “market-based” emissions, or the official emissions figures used by the firms.
      When Recs and offsets are left out of the equation, we get “location-based emissions” – the actual emissions generated from the area where the data is being processed.

      The trend in those emissions is worrying. If these five companies were one country, the sum of their “location-based” emissions in 2022 would rank them as the 33rd highest-emitting country, behind the Philippines and above Algeria.

      Many data center industry experts also recognize that location-based metrics are more honest than the official, market-based numbers reported.

      “Location-based [accounting] gives an accurate picture of the emissions associated with the energy that’s actually being consumed to run the data center. And Uptime’s view is that it’s the right metric,” said Jay Dietrich, the research director of sustainability at Uptime Institute, a leading data center advisory and research organization.

      Nevertheless, Greenhouse Gas (GHG) Protocol, a carbon accounting oversight body, allows Recs to be used in official reporting, though the extent to which they should be allowed remains controversial between tech companies and has led to a lobbying battle over GHG Protocol’s rule-making process between two factions.

      On one side there is the Emissions First Partnership, spearheaded by Amazon and Meta. It aims to keep Recs in the accounting process regardless of their geographic origins. In practice, this is only a slightly looser interpretation of what GHG Protocol already permits.

      The opposing faction, headed by Google and Microsoft, argues that there needs to be time-based and location-based matching of renewable production and energy consumption for data centers. Google calls this its 24/7 goal, or its goal to have all of its facilities run on renewable energy 24 hours a day, seven days a week by 2030. Microsoft calls it its 100/100/0 goal, or its goal to have all its facilities running on 100% carbon-free energy 100% of the time, making zero carbon-based energy purchases by 2030.

      Google has already phased out its Rec use and Microsoft aims to do the same with low-quality “unbundled” (non location-specific) Recs by 2030.

      Academics and carbon management industry leaders alike are also against the GHG Protocol’s permissiveness on Recs. In an open letter from 2015, more than 50 such individuals argued that “it should be a bedrock principle of GHG accounting that no company be allowed to report a reduction in its GHG footprint for an action that results in no change in overall GHG emissions. Yet this is precisely what can happen under the guidance given the contractual/Rec-based reporting method.”

      To GHG Protocol’s credit, the organization does ask companies to report location-based figures alongside their Rec-based figures. Despite that, no company includes both location-based and market-based metrics for all three subcategories of emissions in the bodies of their annual environmental reports.

      In fact, location-based numbers are only directly reported (that is, not hidden in third-party assurance statements or in footnotes) by two companies – Google and Meta.
      And those two firms only include those figures for one subtype of emissions: scope 2, or the indirect emissions companies cause by purchasing energy from utilities and large-scale generators.

      In-house data centers

      Scope 2 is the category that includes the majority of the emissions that come from in-house data center operations, as it concerns the emissions associated with purchased energy – mainly, electricity.

      Data centers should also make up a majority of overall scope 2 emissions for each company except Amazon, given that the other sources of scope 2 emissions for these companies stem from the electricity consumed by firms’ offices and retail spaces – operations that are relatively small and not carbon-intensive. Amazon has one other carbon-intensive business vertical to account for in its scope 2 emissions: its warehouses and e-commerce logistics.

      For the firms that give data center-specific data – Meta and Microsoft – this holds true: data centers made up 100% of Meta’s market-based (official) scope 2 emissions and 97.4% of its location-based emissions. For Microsoft, those numbers were 97.4% and 95.6%, respectively.

      The huge differences in location-based and official scope 2 emissions numbers showcase just how carbon intensive data centers really are, and how deceptive firms’ official emissions numbers can be. Meta, for example, reports its official scope 2 emissions for 2022 as 273 metric tons CO2 equivalent – all of that attributable to data centers. Under the location-based accounting system, that number jumps to more than 3.8m metric tons of CO2 equivalent for data centers alone – a more than 19,000 times increase.

      A similar result can be seen with Microsoft. The firm reported its official data center-related emissions for 2022 as 280,782 metric tons CO2 equivalent. Under a location-based accounting method, that number jumps to 6.1m metric tons CO2 equivalent. That’s a nearly 22 times increase.

      While Meta’s reporting gap is more egregious, both firms’ location-based emissions are higher because they undercount their data center emissions specifically, with 97.4% of the gap between Meta’s location-based and official scope 2 number in 2022 being unreported data center-related emissions, and 95.55% of Microsoft’s.

      Specific data center-related emissions numbers aren’t available for the rest of the firms. However, given that Google and Apple have similar scope 2 business models to Meta and Microsoft, it is likely that the multiple on how much higher their location-based data center emissions are would be similar to the multiple on how much higher their overall location-based scope 2 emissions are.

      In total, the sum of location-based emissions in this category between 2020 and 2022 was at least 275% higher (or 3.75 times) than the sum of their official figures. Amazon did not provide the Guardian with location-based scope 2 figures for 2020 and 2021, so its official (and probably much lower) numbers were used for this calculation for those years.

      Third-party data centers

      Big tech companies also rent a large portion of their data center capacity from third-party data center operators (or “colocation” data centers). According to the Synergy Research Group, large tech companies (or “hyperscalers”) represented 37% of worldwide data center capacity in 2022, with half of that capacity coming through third-party contracts.
      While this group includes companies other than Google, Amazon, Meta, Microsoft and Apple, it gives an idea of the extent of these firms’ activities with third-party data centers.

      Those emissions should theoretically fall under scope 3, all emissions a firm is responsible for that can’t be attributed to the fuel or electricity it consumes.

      When it comes to a big tech firm’s operations, this would encapsulate everything from the manufacturing processes of the hardware it sells (like the iPhone or Kindle) to the emissions from employees’ cars during their commutes to the office.

      When it comes to data centers, scope 3 emissions include the carbon emitted from the construction of in-house data centers, as well as the carbon emitted during the manufacturing process of the equipment used inside those in-house data centers. It may also include those emissions as well as the electricity-related emissions of third-party data centers that are partnered with.

      However, whether or not these emissions are fully included in reports is almost impossible to prove. “Scope 3 emissions are hugely uncertain,” said Dietrich. “This area is a mess just in terms of accounting.”

      According to Dietrich, some third-party data center operators put their energy-related emissions in their own scope 2 reporting, so those who rent from them can put those emissions into their scope 3. Other third-party data center operators put energy-related emissions into their scope 3 emissions, expecting their tenants to report those emissions in their own scope 2 reporting.

      Additionally, all firms use market-based metrics for these scope 3 numbers, which means third-party data center emissions are also undercounted in official figures.

      Of the firms that report their location-based scope 3 emissions in the footnotes, only Apple has a large gap between its official scope 3 figure and its location-based scope 3 figure.

      This is the only sizable reporting gap for a firm that is not data center-related – the majority of Apple’s scope 3 gap is due to Recs being applied towards emissions associated with the manufacturing of hardware (such as the iPhone).

      Apple does not include transmission and distribution losses or third-party cloud contracts in its location-based scope 3. It only includes those figures in its market-based numbers, under which its third party cloud contracts report zero emissions (offset by Recs). Therefore in both of Apple’s total emissions figures – location-based and market-based – the actual emissions associated with their third party data center contracts are nowhere to be found.

      2025 and beyond

      Even though big tech hides these emissions, they are due to keep rising. Data centers’ electricity demand is projected to double by 2030 due to the additional load that artificial intelligence poses, according to the Electric Power Research Institute.

      Google and Microsoft both blamed AI for their recent upticks in market-based emissions.

      “The relative contribution of AI computing loads to Google’s data centers, as I understood it when I left [in 2022], was relatively modest,” said Chris Taylor, current CEO of utility storage firm Gridstor and former site lead for Google’s data center energy strategy unit.
      “Two years ago, [AI] was not the main thing that we were worried about, at least on the energy team.”

      Taylor explained that most of the growth that he saw in data centers while at Google was attributable to growth in Google Cloud, as most enterprises were moving their IT tasks to the firm’s cloud servers.

      Whether today’s power grids can withstand the growing energy demands of AI is uncertain. One industry leader – Marc Ganzi, the CEO of DigitalBridge, a private equity firm that owns two of the world’s largest third-party data center operators – has gone as far as to say that the data center sector may run out of power within the next two years.

      And as grid interconnection backlogs continue to pile up worldwide, it may be nearly impossible for even the most well intentioned of companies to get new renewable energy production capacity online in time to meet that demand.

      This article was amended on 18 September 2024. Apple contacted the Guardian after publication to share that the firm only did partial audits for its location-based scope 3 figure. A previous version of this article erroneously claimed that the gap in Apple’s location-based scope 3 figure was data center-related.

      The difference between consumption as measured via green certificates and the actual consumption of data centers worldwide
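
      The headline ratios in the article are easy to sanity-check from the figures it quotes. A minimal TypeScript sketch, using only the article's reported numbers (the 662% and 275% gaps, and Microsoft's 2022 data-center scope 2 figures), shows how the "X% higher" phrasing maps onto the "N times" multiples:

      ```ts
      // "X% higher" and "N times higher" are related by N = 1 + X/100.
      const pctHigherToMultiple = (pct: number): number => 1 + pct / 100;

      console.log(pctHigherToMultiple(662)); // 7.62 - the in-house data center gap in the headline
      console.log(pctHigherToMultiple(275)); // 3.75 - the 2020-2022 scope 2 gap

      // Microsoft's 2022 data-center scope 2 figures quoted in the article (metric tons CO2e).
      const marketBased = 280_782;     // official, Rec-adjusted number
      const locationBased = 6_100_000; // grid-mix number
      console.log((locationBased / marketBased).toFixed(1)); // "21.7" - the "nearly 22 times increase"
      ```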

  2. Jul 2024
    1. With Cloud Run gen1 and Go apps, we're getting sub-1s start times. With Node.js or Python, start times are dismal, i.e. 5s or more. That's 5-10 times slower than starting these services locally. Although I haven't checked this rigorously, it seems that reading many files is very slow on Cloud Run, and the large number of imports these apps pull in doesn't play well with Cloud Run. With gen2, for a Go app, startup time went from 500ms to 3-4s, which is more than I expected, and support wasn't able to tell us whether that's normal. That's kind of a shame, because otherwise it's one of the best services I've used.

      cloud run cold start
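
      A quick way to reproduce numbers like these is to time the first request against a service that has scaled to zero and compare it with warm requests. A rough TypeScript sketch (Node 18+, global fetch); the URL is a placeholder for your own Cloud Run service, and the measured time includes network latency on top of the cold start itself:

      ```ts
      // Rough cold-vs-warm latency probe for an HTTP service such as Cloud Run.
      // SERVICE_URL is a hypothetical placeholder; point it at a service that has scaled to zero.
      const SERVICE_URL = "https://my-service-xyz.a.run.app/";

      async function timeRequest(label: string): Promise<void> {
        const start = Date.now();
        const res = await fetch(SERVICE_URL);
        console.log(`${label}: ${Date.now() - start} ms (status ${res.status})`);
      }

      async function main(): Promise<void> {
        await timeRequest("first request (likely cold)");
        for (let i = 1; i <= 3; i++) {
          await timeRequest(`warm request ${i}`);
        }
      }

      main().catch(console.error);
      ```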

    1. Mark Boost, CEO of UK-based cloud company Civo, is similarly disappointed in the outcome. Boost said the deal is "not good news" for the cloud industry, and added that several important questions still need to be answered. "We need to know more about how the process of compensation will work," he said. "Will all cloud providers in Europe be compensated, or just CISPE members? Is this a process that will be arbitrated by Microsoft? Where are the regulators in this?" Boost added that the deal will benefit CISPE members only in the short term, but that the cloud industry and its customers will pay the price in the long term.

      This is a bit like how AWS will share sustainability data under NDA – it works for that provider, but not for everyone else.

  3. Jun 2024
    1. "interoperable and openly accessible European data processing ecosystem, known as IPCEI-CIS. This initiative aims to reduce reliance on external providers and promote open source technologies" "The European Union has approved a €1.2 billion investment" - [ ] zoek het IPCEI-ICS project op #geonovumtb en kijk nr relevantie voor bijv SIMPL en andere DS initiatieven. #10mins #elaag #pmiddel

    1. That’s no small task: In 2022, SAP spent approximately €7.2 billion in purchases from more than 13,000 suppliers worldwide, its annual report shows; 30% of that being on cloud services – more on that below.

      €7.2bn – over €2bn of which is spent on cloud, all by themselves?

  4. May 2024
  5. Mar 2024
    1. Video Summary

      The first part of the video introduces Laurine Gouin, head of the Solidatek programme, joined by Alberto and Ismaël from DXC. They discuss the importance of digital technology for non-profit associations and introduce the topic of cloud computing in the non-profit sector. They explain how the cloud can help associations manage their IT resources more efficiently and more economically.

      Highlights: 1. Introduction and context [00:00:03] * Introduction of Laurine Gouin and the speakers * Objective of the webinar on cloud computing * Importance of digital technology for associations 2. The Solidatek programme [00:02:08] * Presentation of Solidatek, a digital solidarity programme * Services offered to associations to strengthen their impact through digital technology * Access to software and IT hardware at reduced prices 3. Cloud computing [00:06:06] * Introduction to basic cloud concepts * Comparison of the cloud with running a pizzeria * Advantages of the cloud for associations 4. Cloud responsibilities and services [00:17:35] * The different cloud service models: IaaS, PaaS, SaaS * Shared responsibility between the cloud provider and the user * Examples of common cloud services and their usefulness for associations

      The second part of the video focuses on the different forms of cloud computing and their relevance for associations and businesses. It covers users' digital identity, public, private and hybrid cloud options, and the costs and security associated with each type. The video also explains the importance of compliance with the GDPR and French data protection law, highlighting the role of institutional and private actors in the digital transformation of the public sector.

      Highlights: 1. Digital identity and cloud options [00:24:40] * Explains users' digital identity * Describes public, private and hybrid cloud solutions * Discusses the flexibility of the cloud and the available options 2. Security and regulatory compliance [00:27:01] * Stresses the importance of GDPR compliance * Presents the French data protection laws * Mentions the role of the CNIL and other regulators 3. Cloud strategy and public sector actors [00:29:27] * Covers the French public sector's cloud strategy * Identifies the key actors in the digital transformation * Discusses the challenges of infrastructure obsolescence and security 4. Cloud migration for associations [00:43:12] * Advice on migrating to the cloud for associations * Lists advantages such as cost efficiency and flexibility * Highlights the importance of staff training and adoption

      This video is the third part of a series on cloud migration. It emphasises the importance of not neglecting user onboarding, of putting a complete migration strategy in place with clear objectives, of adopting a gradual approach, and of monitoring and optimising the post-migration period.

      Highlights: 1. Importance of user training [00:49:01] * Highlights the major change for users * Need for easy onboarding * Particular attention required 2. Complete migration strategy [00:49:20] * Define objectives and priorities * Importance of clear objectives * Preparation for potential problems 3. Gradual migration approach [00:50:02] * Flexibility of the strategy * Adaptation to changes and events * Importance of letting the strategy evolve 4. Post-migration monitoring and optimisation [00:50:36] * Performance tracking * Continuous improvement initiatives * Gathering user feedback for adjustments

    1. Video Summary

      The video presents a lecture on eating disorders, how they are understood and how they are treated. The speaker, Nathalie, discusses the complexity of these disorders, which are often poorly understood by psychiatrists and somatic physicians. She stresses the importance of clinical research and how knowledge has evolved over the past 20 years. Nathalie also encourages medical students to choose psychiatry, a discipline that saves lives.

      Highlights: 1. Introduction and the complexity of eating disorders [00:00:19] * Introduction of Nathalie and the topic * Difficulty of understanding these disorders * Importance of clinical research 2. Role of food and regulation of eating behaviour [00:02:26] * Eating as a vital need and a pleasure * Regulation by the central nervous system * Influence of genetics and environmental factors 3. Impact of stress and anxiety on eating [00:04:32] * Variability of the eating response to stress * Link between stress, anxiety, negative emotions and eating disorders * Importance of prevention and early intervention 4. Classification and understanding of eating disorders [00:07:56] * Definition and types of eating disorders * Distinction between obesity and eating disorders * Evolution of knowledge and treatments 5. Consequences and treatment of eating disorders [00:13:40] * Somatic and psychiatric morbidity * Importance of early, appropriate intervention * Development of specialised care pathways

      The video addresses eating disorders, in particular bulimia, binge-eating disorder and anorexia nervosa. It stresses the importance of a multidisciplinary approach to treating and preventing these disorders. The biopsychosocial model is presented as essential for understanding and treating eating disorders, taking genetic, environmental and psychological factors into account.

      Highlights: 1. Bulimia and binge-eating disorder [00:20:53] * Frequency of binge-eating episodes * Absence of weight-control strategies * Link with obesity and somatic complications 2. Anorexia nervosa [00:22:06] * Less frequent but more deadly * Mainly affects young women * Suicide risk and somatic complications 3. Prevalence of eating disorders [00:22:42] * 5 to 10% of the population affected * 900,000 people in France suffer from severe disorders * Importance of early detection and care 4. Multidisciplinary approach [00:25:01] * Need for collaboration between health professionals * Importance of training and destigmatisation * Development of specialised care pathways 5. Treatment and prevention [00:29:50] * Somatic and nutritional follow-up * Psychotherapy and appropriate medication * Importance of the transition from adolescence to adulthood 6. Social and cultural impact [00:37:24] * Influence of culture and society on these disorders * Need to work with sociologists and anthropologists * Approaches adapted to cultural diversity

      The video addresses the psychology of eating and its link with eating disorders. It explains how psychological processes such as reward, inhibition and interoception influence food choices and food intake. The presentation emphasises that these processes can be disrupted, leading to disordered eating behaviours that range from slight deviations from the norm to clinically diagnosed eating disorders. Obesity and bulimia-spectrum disorders are associated with low cognitive control and high reactivity to food rewards, whereas restrictive disorders such as anorexia nervosa show the opposite pattern.

      Highlights: 1. Psychological processes and eating [00:43:00] * Importance of the reward, inhibition and interoception processes * Impact on food intake and food choices * Disruption leading to eating disorders 2. Eating disorders and obesity [00:44:00] * High prevalence of eating disorders * Links with obesity and diagnosed eating disorders * Effects on quality of life and mortality 3. Appetite control and food reward [00:50:01] * Role of homeostatic signals and psychological processes * Distinction between liking and wanting food * Variability of reactivity to food rewards 4. Cognitive control and eating disorders [00:58:18] * Importance of cognitive control in eating behaviours * Low cognitive control linked to overeating and bulimia * Heightened cognitive control in restrictive disorders 5. Interoception and eating disorders [01:05:23] * Sensitivity to bodily signals of hunger and satiety * Role of the insula in integrating interoceptive signals * Variability of interoception in eating disorders

    1. many hospitals started moving their DICOM infrastructures to the cloud because it's cheaper, it's easier, it's faster, it's good – so they did the shift, and they used the legacy protocol DICOM without sufficient security

      Protocol intended for closed networks now found on open cloud servers

    1. The three biggest universal cloud service providers (CSPs) operating in the EU – Google, Amazon and Microsoft – have a combined market share of 70 per cent. European alternatives to these American CSPs – also known as hyperscalers – are limited, both in number and in scale

      GAM have 70% market share in Europe wrt cloud (though the relevant customer group here seems to be governments – what is their market share there?). Says EU alternatives are limited in number and scale but does not mention any. Are some mentioned further on? Also note that the EC is not aiming for EU universal cloud services but very deliberately for federated cloud and services. So this comparison may not be all that useful, as the EU will not seek to have EU-based GAM-style providers, but will seek to make GAM-style providers too unwieldy to be relevant.

    2. European governments have thus been pushing for reduced reliance on China’s Huawei for critical parts of telecommunication networks in the shift from 4G to 5G networks.

      The article calls 'a 5G moment' the moment of realisation that dependencies in a technology may erode one's strategic position, by letting critical infrastructure be controlled by tech firms that can be influenced by other governments. In the case of 5G it's Huawei and the Chinese government; in the case of cloud it's GAM and the US government. This is not a new notion – it is why the EU digital and data legal framework was created over the past 4 years – so why this paper now?

    3. Cloud sovereignty requires quality technology, but also trust, security and diversification – three elements that are not necessarily ensured by the current American offers

      DMA-level cloud services from third countries provide reliable technology but do not bring the trust, security and diversification needed for cloud sovereignty

  6. Feb 2024
    1. We’ve (painstakingly) manually reviewed 310 live MLOps positions, advertised across various platforms in Q4 this year

      They went through 310 role descriptions and, even though role descriptions may vary significantly, they found 3 core skills that a large percentage of MLOps roles required:

      📦 Docker and Kubernetes 🐍 Python 🌥 Cloud

  7. Jan 2024
    1. LocalStack is a cloud service emulator that runs AWS services solely on your laptop without connecting to a remote cloud provider.

      https://www.localstack.cloud/
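
      LocalStack exposes the emulated AWS APIs on a single local edge endpoint (port 4566 by default), so the standard SDKs work unchanged once you point them at it. A minimal TypeScript sketch using the AWS SDK for JavaScript v3 against the emulated S3; the bucket name is made up, and the "test" credentials are just the dummy values LocalStack accepts, not real AWS credentials:

      ```ts
      import { S3Client, CreateBucketCommand, ListBucketsCommand } from "@aws-sdk/client-s3";

      // Point the SDK at the LocalStack edge endpoint instead of AWS.
      const s3 = new S3Client({
        endpoint: "http://localhost:4566", // LocalStack's default edge port
        region: "us-east-1",
        forcePathStyle: true, // avoid bucket-name DNS resolution against localhost
        credentials: { accessKeyId: "test", secretAccessKey: "test" }, // dummy values accepted by LocalStack
      });

      async function main(): Promise<void> {
        await s3.send(new CreateBucketCommand({ Bucket: "demo-bucket" })); // hypothetical bucket name
        const { Buckets } = await s3.send(new ListBucketsCommand({}));
        console.log(Buckets?.map((b) => b.Name)); // -> [ "demo-bucket" ]
      }

      main().catch(console.error);
      ```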

  8. Nov 2023
    1. 36% of Salesforce customers that have bought other companies’ cloud products – like Service Cloud, Sales or Marketing Cloud – have also purchased Community Cloud. In addition to that, 21% of respondents intend to purchase Community Cloud in the very near future. If this is true, more than 50% of the most active Salesforce customers will use Community Cloud actively for their business needs very soon. And all of that within two years of the product launch!

      These numbers suggest a growing preference for Community Cloud among Salesforce's most active user base, which underscores a substantial opportunity for businesses to enhance their Salesforce experience through Community Cloud integration.

    1. For all data to be in one place and for effective management of all the processes, as well as streamlined idea implementation, the Idea Management System is created. This is a digital platform designed to improve the process of generating, evaluating, and tracking ideas. It provides a centralized location where users can submit their ideas, and where managers can review and evaluate them. IMS tools also provide valuable analytics, allowing managers to track the progress of ideas and identify areas for improvement.

      Similar to the Idea Management System (IMS) discussed earlier, the Partner Portal serves as a centralized digital platform. Its purpose is to enhance the partner program by providing a unified space for partners to engage, collaborate, and access resources.

    1. In a recent report into what tech partners want, 58% of partners cite a lack of communications as a factor in why partnerships don’t reach expectations. Communication challenges include channels and frequency.

      The report highlights issues with channels and frequency as primary communication challenges. Ensuring a streamlined and efficient communication process within the partner portal is crucial for overcoming these hurdles. It's evident that resolving these communication issues is fundamental to fostering successful tech partnerships and optimizing the overall effectiveness of the partner program.

    1. Accessing Ideas in the Salesforce Lightning Experience (LEX) has become increasingly challenging for users, as they are required to switch to Classic in order to access ideas on their online community. This process of switching back and forth between Classic and Lightning is highly inconvenient and poses a significant hurdle for Lightning users.

      The current challenge of accessing ideas in Salesforce Lightning Experience (LEX) hinders this objective, forcing users to inconveniently switch to Classic. This disrupts the flow of feedback, slowing down the process and creating hurdles for Lightning users. To optimize the system, it's essential to streamline idea access within Lightning, eliminating the need for users to switch back and forth.

    1. For search engines to find your site, it must be at the root level. If your site is not at the root level, meaning it has a URL prefix, you need to create a root site and submit the sitemap accordingly. This is crucial for proper indexing. Please note, The address for a root-level site has the format https://site_URL.

      If your site has a URL prefix, it's crucial to create a root site and submit the sitemap accordingly. This step is pivotal in facilitating proper indexing. Take the time to establish a root-level site to enhance your SEO strategy, making it easier for search engines to discover and rank your Salesforce Experience Cloud site. This small but critical adjustment will significantly boost your site's overall search engine performance.

    1. For an example of an unintended consequence, let’s say the result of your optimization project is spare capacity at a cloud provider. That capacity is then offered on the spot market, which lowers the spot price, and someone uses those instances for a CPU-intensive workload like crypto or AI training, which they only run when the cost is very low. The end result is that the capacity could end up using more power than when you were leaving it idle, and the net consequence is an increase in carbon emissions for that cloud region.

      This is because capacity, utilisation and power used are related, but different concepts.

      Your capacity, which you share back to the pool, is then available to someone else, who ends up buying it to use for a task that has a higher average utilisation, resulting in more power being used in absolute terms, even if less is attributed to you.

      This also raises the question – who is responsible: the customer for making the capacity available, or the cloud provider who accepts this workload?

      The cloud providers get to set the terms of service for using their platform.
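
      The capacity/utilisation/power distinction can be made concrete with a toy model. Assuming a simple linear server power curve (idle draw plus a utilisation-dependent share up to peak – the wattages below are invented for illustration), handing freed capacity to a high-utilisation workload raises absolute power draw even though none of it is attributed to the original customer:

      ```ts
      // Toy linear power model: P = idle + u * (max - idle), with utilisation u in [0, 1].
      // The wattages are invented for illustration, not measurements.
      const IDLE_W = 100; // server drawing power while doing almost nothing
      const MAX_W = 400;  // the same server at full load

      const powerDraw = (utilisation: number): number =>
        IDLE_W + utilisation * (MAX_W - IDLE_W);

      const before = powerDraw(0.05); // you leave the instance nearly idle: 115 W
      const after = powerDraw(0.9);   // someone buys the freed capacity for a heavy batch job: 370 W

      console.log({ before, after, extraWatts: after - before }); // absolute draw rises by 255 W
      ```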

    1. Salesforce provides a set of cloud-based resources so you can build your own applications and websites easily, cheaply, and fast. This is where Salesforce Experience Cloud comes in. Experience Cloud allows you to create branded sites connected to your CRM without writing code, thereby addressing different purposes and achieving multiple online objectives. Eliminate worries about which infrastructure, operating systems, or development and deployment tools to use. With Experience Cloud, you have everything you need under one roof.

      It's a user-friendly and cost-effective solution, not only from a development perspective but also for deploying changes like adding site languages, access permissions, etc.

    1. If you want to give your site members access to your Content Libraries, you can use the Libraries component, which is available in templates such as Customer Service, Build Your Own (Aura), Partner Central, and Customer Account Portal. Once the component is added, site members can view and open the libraries they have access to, either in a list view or a tile view.

      I've been exploring Salesforce Experience Cloud recently, and it's great to know that I can easily grant site members access to our Content Libraries using the Libraries component. This feature makes it easier to access the resources they need.

  9. Oct 2023
    1. With Salesforce Experience Cloud Builder, you can create custom online spaces for a variety of business processes without any coding required.

      Creating a page with Salesforce Experience Builder is straightforward:

      1. Go to the Pages menu and click 'New Page.'
      2. Choose between standard or object page types.
      3. Select a layout, pre-configured or custom.
      4. Drag and drop components onto the page.
      5. Add components from the AppExchange if needed.

      With Salesforce Experience Builder, you can create custom online spaces without coding.

    1. Follow these step-by-step instructions to enable Global Search in Salesforce Experience Cloud:

      To enable Global Search in Salesforce Experience Cloud:

      1. In Experience Builder, go to the Pages menu and search for 'Search.'
      2. Delete the standard Search Results component.
      3. Drag and drop the Global Search Results component onto the canvas.
      4. Customize search results by adding searchable objects.
      5. Save, publish, and test your Global Search.

      Global Search streamlines the search process and improves productivity for community users.

  10. Sep 2023
    1. Kamatera is a very good option to run a mail server because:

      • They don’t block port 25, so you can send unlimited emails (transactional email and newsletters) without spending money on an SMTP relay service. Kamatera doesn’t have any SMTP limits. You can send a million emails per day.
      • The IP address isn’t on any email blacklist. (At least this is true in my case. I chose the Dallas data center.) You definitely don’t want to be listed on the dreaded Microsoft Outlook IP blacklist or the spamrats blacklist. Some blacklists block an entire IP range and you have no way to delist your IP address from this kind of blacklist.
      • You can edit the PTR record to improve email deliverability.
      • They allow you to send newsletters to your email subscribers with no hourly or daily limits whatsoever.
      • You can order multiple IP addresses for a single server. This is very useful for folks who need to send a large volume of emails. You can spread email traffic across multiple IP addresses to achieve better email deliverability.

    1. By incorporating the Navigation Menu component into your Salesforce Experience Cloud website, you can expand the scope of your navigation beyond conventional topics. This component allows you to include a diverse range of items in your navigation menu, such as Salesforce objects, topics, pages within your site, external URLs, and menu labels

      A well-structured navigation menu can significantly enhance user experience, making it easier for users to find the information they need and interact with the community.

    1. Custom Labels that are installed as part of a managed package cannot be edited or deleted due to the restrictions of managed packages. However, you can still override the existing translations for these labels.

      This flexibility ensures that businesses can tailor the platform to their specific linguistic needs, even within the constraints of managed packages.

  11. Aug 2023
    1. MFA Salesforce setup is not mandatory for your company’s Experience Cloud sites, employee communities, help portals, or e-commerce sites/storefronts. You have the flexibility to choose whether or not to activate MFA for Salesforce Experience Cloud external users accessing these sites. External users can be identified based on the following types of licenses:

      Implementing MFA is a proactive approach to ensuring that even if a password is compromised, unauthorized access can still be prevented.

    1. Salesforce Experience Cloud serves as a comprehensive platform that enables you to create various digital experiences, such as partner portals, volunteer communities, support portals, customer communities, and more. It’s a space that empowers your users to stay up-to-date with the latest information, access valuable resources, communicate with each other, provide feedback, or contact you to resolve their issues. Therefore, utilizing such an environment to host events will undoubtedly revolutionize the way they are managed and experienced.

      Interesting, should investigate this later, must be worthwhile

  12. Mar 2023
    1. The text refers to a strike called for 8 March against the cloud regime, understood as the centralised computing power managed by big technology companies such as Amazon, Google and Microsoft. The strike is called by a series of self-managed projects, cultural organisations, private companies and other constellations. It aims to experiment with reducing the use of cloud-based applications to a minimum, to debate the implications of the cloud regime, to document the draining of community resources by big tech infrastructure, to dream of alternative methods of exuberant and joyful survival, and to imagine local networks for transnational modes of communication and operation in transversal solidarity. The text also provides information on the nature and risks of the cloud regime and the reasons behind the strike call.

  13. Dec 2022
  14. Nov 2022
  15. Oct 2022
    1. It's like paying a quarter of your house's value for earthquake insurance when you don't live anywhere near a fault line.

      What paying for cloud in some scenarios really is

    2. The second is when your load is highly irregular. When you have wild swings or towering peaks in usage.

      2nd great use of cloud services

    3. The cloud excels at two ends of the spectrum, where only one end was ever relevant for us. The first end is when your application is so simple and low traffic that you really do save on complexity by starting with fully managed services.

      1st great use of cloud services

  16. Aug 2022
    1. A refresh scope was added, which likewise changes how beans are managed in a distinctive way: when the externalized configuration (.properties) is refreshed, the new configuration values are hot-loaded without the application having to restart.
    1. @EnableBinding: binds the interface that defines the channels to a bean, so that we can use that bean to operate the channels and send and receive messages.

      The annotation's parameter is a class; that class is instantiated as a bean, and the bean is used to handle messages

    2. One reason Spring Cloud Stream has been around for so long yet still isn't widely adopted is that this technology stack involves too many moving parts: if some obscure problem shows up in production and you have to read the source code to fix it, the engineering effort goes far beyond what anyone expects.

      Drawbacks

    3. Admittedly, after integrating Spring Cloud Function, message sending and receiving has moved into a brand-new stage, but a configuration convention like <functionName> + -in- + <index> makes me somewhat uncomfortable... at this point I even think the annotation-based approach deprecated before 3.1 might be better suited to our development work

      The new trend

    1. We can see that both consumers received the message

      Each consumer instance creates its own anonymous queue

    2. Create a consumer group

      In effect this binds the two consumer instances to the same queue

    3. We need to listen to the previously created channel greetingChannel. Let's create a binding for it

      If the consumer and the producer run in the same instance, a local call takes precedence and no queue message is produced. The consumer can still receive MQ messages normally

    1. A good layperson's overview of one effort to increase cloud albedo to counteract climate change. I think that lowering insolation is somehow missing the point of combatting climate change, but it's a legitimate approach that still needs a lot of research.

      What's particularly good about this article is how it manages to demonstrate how complex the problem is without smothering the reader in technobabble.

  17. Jun 2022
    1. Cloud costs can be up to 5X higher than traditional on-premise infrastructure. And that while the cloud promise is so beautiful. What is going on? This article gives you more insight into the other side of the coin and shows you that the cloud promise is not the full story.

      Cloud costs can be up to 5X higher than on-premise costs

  18. May 2022
  19. Mar 2022
    1. According to a 2014 study by Fafiec, free and open source software plays an increasingly important role in the evolution of digital professions. And the market's shift toward cloud computing only reinforces this development model.

      Free/open source software is important for the development of the cloud (note: this dates from 2014)

    1. The flotsam and jetsam of our digital queries and transactions, the flurry of electrons flitting about, warm the medium of air. Heat is the waste product of computation, and if left unchecked, it becomes a foil to the workings of digital civilization. Heat must therefore be relentlessly abated to keep the engine of the digital thrumming in a constant state, 24 hours a day, every day.

      "Cloud Computing" has a waste stream, and one of the waste streams is heat exhaust from servers. This is a poetic description of that waste stream.

  20. Feb 2022
    1. Linked Data refers to the technical preparation of data so that linking the data becomes possible. The data model used for this is RDF, which was originally developed for the Semantic Web.
    1. The cloud advantage was one of the main pillars upon which the Stadia business was built, and there just isn't any evidence that this theoretical benefit is working to Google's benefit in real life.

      Has better latency != can have better latency. If there's demand for Stadia I assume they could use more of those data centers. But I'm not sure the performance of Stadia is the problem here; it's far, far easier to use Stadia than GeForce NOW. Yet people don't use it.

    2. "The fundamental benefit of our cloud-native infrastructure is that developers will be able to take advantage of hardware and power in ways never before possible, and that includes taking advantage of the power of multiple GPUs at once."

      Notably, this goal has been stated before, I believe by Microsoft for the Xbox 360? Running demanding workloads in the cloud elastically makes a lot more sense than buying hardware you rarely use.

    3. For Nvidia, the speed of the 3080 package makes for a solid sales pitch: This cloud PC is probably faster than your home system, so cloud gaming is worth it. Cloud gaming will always present a latency tradeoff, but that latency is easier to accept if you're getting otherwise-unattainable graphics quality along with it.

      Smart strategy by Nvidia.

    1. Figure 2.8 shows an overview of the so-called “Linking Open Data Cloud”

      Figure


  21. Jan 2022
    1. According to Wilder, a cloud-native application is any application that was architected to take full advantage of cloud platforms. These applications: Use cloud platform services. Scale horizontally. Scale automatically, using proactive and reactive actions. Handle node and transient failures without degrading. Feature non-blocking asynchronous communication in a loosely coupled architecture.

      Cloud-native applications

    1. the existing Wikimedia Cloud Services computing infrastructure (virtual private server (VPS)), the Toolforge hosting environment (platform as a service (PaaS))

      Wikimedia Cloud VPS is Infrastructure as a Service, whereas Toolforge is Platform as a Service.

      As explained in this article, PaaS is further away from on-premises than IaaS, and the user does not have to manage the operative system, middleware and runtimes.

    1. He said the new AI tutor platform collects “competency skills graphs” made by educators, then uses AI to generate learning activities, such as short-answer or multiple-choice questions, which students can access on an app. The platform also includes applications that can chat with students, provide coaching for reading comprehension and writing, and advise them on academic course plans based on their prior knowledge, career goals and interest

      I saw an AI Tutor demo at ASU+GSV in 2021 and it was still early stage. Today, the features highlighted here are yet to be manifested in powerful ways that are worth utilizing; however, I do believe the aspirations are likely to be realized, and in ways beyond what the product managers are even hyping. (For example, I suspect AI Tutor will one day be able to provide students feedback in the voice/tone of their specific instructor.)

  22. Dec 2021
    1. Edge computing is an emerging new trend in cloud data storage that improves how we access and process data online. Businesses dealing with high-frequency transactions like banks, social media companies, and online gaming operators may benefit from edge computing.

      Edge Computing: What It Is and Why It Matters – https://en.itpedia.nl/2021/12/29/edge-computing-what-it-is-and-why-it-matters/

  23. Nov 2021
    1. Do you have a high-quality and almost irresistible application in your bag? Your potential customers will not enjoy your app to the full if they cannot access it easily and quickly. That is why you need to consider how to choose the right SaaS hosting provider carefully. In this article, we will review different SaaS cloud hosting options and their strengths and weaknesses. Read on to find out how to make hosting for your SaaS application reliable, cost-effective, and scalable.

      Do you have a high-quality and almost irresistible application in your bag? Your potential customers will not enjoy your app to the full if they cannot access it easily and quickly. That is why you need to consider how to choose the right SaaS hosting provider carefully.

      In this article, we will review different SaaS cloud hosting options and their strengths and weaknesses. Read on to find out how to make hosting for your SaaS application reliable, cost-effective, and scalable.

    1. 10 Best SaaS Startups in 2022 for Your Inspiration (published Jul 29, 2020, updated Nov 5, 2021). Today, the SaaS industry is gaining momentum. According to research, 80% of businesses already use at least one SaaS application. Hence, building a SaaS company is currently a skyrocketing business idea. To help you find inspiration and launch the best SaaS startup ever, in this article you will find 10 great examples of SaaS startups you can learn from. All of them produce valuable and fast-growing products for now. Likewise, Growthlist and AngelList marked them as promising SaaS startups of 2021-2022. Without further ado, let’s take a closer look at them.

      Today, the SaaS industry is gaining momentum. According to research, 80% of businesses already use at least one SaaS application. Hence, building a SaaS company is currently a skyrocketing business idea.

      To help you find inspiration and launch the best SaaS startup ever, in this article you will find 10 great examples of SaaS startups you can learn from. All of them produce valuable and fast-growing products for now. Likewise, Growthlist and AngelList marked them as promising SaaS startups of 2021-2022.

      Without further ado, let’s take a closer look at them.

  24. Oct 2021
    1. Adobe Audition: Digital audio workstation software.

      I already have Creative Cloud. I might as well use Adobe Audition.

    1. Leading SaaS Trends for 2021 You Shouldn’t Miss (Dec 1, 2020). Nowadays, different kinds of businesses are extensively moving to the cloud. O’Reilly reports that 88% of the respondent companies had used cloud services before lockdown and expect their further growth by Q2 2021. Therefore, SaaS application development looks also like a profitable venture today. Yet, to stay afloat in the cloud arena, you need to arm your offering with precise technology and fresh tools. In other words, you need to keep your eye on the future of SaaS. To help you deploy the promptest cloud solutions, in this post, I collected the top SaaS trends for 2021.

      SaaS application development looks also like a profitable venture today. Yet, to stay afloat in the cloud arena, you need to arm your offering with precise technology and fresh tools. In other words, you need to keep your eye on the future of SaaS.

      To help you deploy the promptest cloud solutions, in this post, I collected the top SaaS trends for 2021.

  25. Sep 2021
    1. Top 8 SaaS Pricing Models: Ultimate Guide for 2021 (Oct 28, 2020). There’s hardly a thing that impacts your software-as-a-service product revenue more than SaaS pricing models. Still, for many companies choosing the right monetization strategy is no easy feat. To shed some light on this matter, we have prepared a detailed guide on the most popular SaaS pricing strategies. You will find out the pros and cons of each option and learn how to adopt them properly from well-known SaaS companies. Finally, we will discuss the required steps to take when choosing between different SaaS business models.

      There’s hardly a thing that impacts your software-as-a-service product revenue more than SaaS pricing models. Still, for many companies choosing the right monetization strategy is no easy feat.

      To shed some light on this matter, we have prepared a detailed guide on the most popular SaaS pricing strategies. You will find out the pros and cons of each option and learn how to adopt them properly from well-known SaaS companies.

      Finally, we will discuss the required steps to take when choosing between different SaaS business models.

  26. Aug 2021
    1. SaaS vs PaaS vs IaaS: Choosing the Best Cloud Computing Model (Jun 12, 2020). The usage of cloud computing has long been a standard practice for businesses. More and more companies harness the power of the software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS) models. Thus, they can save on hardware and protect their sensitive information from hacking and internal data theft. In this article, we discuss the SaaS vs PaaS vs IaaS models and define their principal differences. What are the core parameters for comparison? They include primary characteristics, usage, the main benefits, and drawbacks.

      The usage of cloud computing has long been a standard practice for businesses. More and more companies harness the power of the software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS) models. Thus, they can save on hardware and protect their sensitive information from hacking and internal data theft.

      In this article, we discuss the SaaS vs PaaS vs IaaS models and define their principal differences. What are the core parameters for comparison? They include primary characteristics, usage, the main benefits, and drawbacks.

  27. Jul 2021
    1. It’s hard to imagine now that only some decades ago people had to purchase and download software for all possible needs on their PCs. No matter the task, every new tool took some space on the hard drive and every program update was a nightmare. Why? Because it needed extra space and pretty much of your time and effort if something went wrong. Those days are gone thanks to cloud computing and SaaS products. These software as a service tools have changed the way we work and use software for good. As per recent forecasts, this market is going to reach $623 billion by 2023. So let’s find out what are SaaS perspectives in the near future, why it’s better to start off with a SaaS minimum viable product, and how to create a SaaS MVP.

      Every new piece of software took up space on the hard drive, and updates usually took several hours. Gladly, those days are in the past. Forecasts predict that the SaaS market will reach $623 billion by 2023. We have dived deep into the SaaS domain and created a guide on how to start with a cloud website.

  28. Jun 2021
  29. May 2021
  30. Apr 2021
  31. Mar 2021
    1. Patricio R Estevez-Soto. (2020, November 24). I’m really surprised to see a lot of academics sharing their working papers/pre-prints from cloud drives (i.e. @Dropbox @googledrive) 🚨Don’t!🚨 Use @socarxiv @SSRN @ZENODO_ORG, @OSFramework, @arxiv (+ other) instead. They offer persisent DOIs and are indexed by Google scholar [Tweet]. @prestevez. https://twitter.com/prestevez/status/1331029547811213316

  32. Feb 2021
    1. Dev Environments Built for the Cloud.

      Free

      • 50 hour/month
      • Public Github projects

      Reference #1

  33. Dec 2020
    1. In general, for smaller businesses like startups, it is usually cheaper and better to use managed cloud services for your projects. 

      Advice for startups working with ML in production

    2. This has its pros and cons and may depend on your use case as well. Some of the pros to consider when considering using managed cloud services are:

      Pros of using cloud services:

      • They are cost-efficient
      • Quick setup and deployment
      • Efficient backup and recovery

      Cons of using cloud services:

      • Security issue, especially for sensitive data
      • Internet connectivity may affect work since everything runs online
      • Recurring costs
      • Limited control over tools
  34. Oct 2020
    1. There is a day when that day will come. There are only so many people on Earth and only so many hours a day to fiddle around with our phones, and at some point, the big clouds and hyperscalers of the world will constitute the majority of compute capacity and the rapacious need for capacity will abate.

      as fine a soliloquy to utility/grid compute & the folding down of the personal-computing age as has been written

      i keep hoping mankind will find it interesting to steal back this fire from the gods yet again, for their own ends

    1. cloud

      I noticed how Collins uses ‘cloud’ constantly in this narrative, which is perhaps foreshadowing a bad omen to come.

  35. Sep 2020
    1. the internet remembers

      Wow, uh, ironic: a "software's long term cost" ad from GCP. Coming from a company that only supports phones for 3 years and is notorious for deprecating services, this is probably not what you want to remind people of: that this cloud provider will pull the plug rather than pay the upkeep cost of its services.

    1. This impacts monetization and purchasing at companies. Paying for a new design tool because it has new features for designers may not be a top priority. But if product managers, engineers, or even the CEO herself think it matters for the business as a whole—that has much higher priority and pricing leverage.

      If a tool benefits the entire team, vs. just the designer, it becomes an easier purchase decision.

  36. Jul 2020
    1. A key strength of OnlyOffice is its cloud-based storage options, which let you connect your Google Drive, Dropbox, Box, OneDrive, and Yandex.Disk accounts.
  37. Jun 2020
    1. The section of code with exports.app = functions.https.onRequest(app); exposes your Express application so that it can be accessed. If you don't have the exports section, your application won't start correctly.
    2. Firebase Functions enables you to use the ExpressJS library to host a Serverless API. Serverless is just a term for a system that runs without physical servers. This is a bit of a misnomer because it technically does run on a server; however, you're letting the provider handle the hosting aspect.
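
      A minimal sketch of the pattern these two notes describe, assuming the Node.js Firebase Functions v1 API; the route and file name are illustrative, not taken from the annotated page:

        // index.js - host an Express app behind a single Firebase HTTPS function
        const functions = require('firebase-functions');
        const express = require('express');

        const app = express();

        // Illustrative route; any Express routing works the same way.
        app.get('/hello', (req, res) => {
          res.json({ message: 'Hello from a serverless Express API' });
        });

        // Without this export, Firebase has nothing to invoke and the
        // application will not start correctly, as the first note warns.
        exports.app = functions.https.onRequest(app);
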
    1. Just as journalists should be able to write about anything they want, comedians should be able to do the same and tell jokes about anything they please

      where's the line though? every output generates a feedback loop with the hivemind, turning into input to ourselves with our cracking, overwhelmed, filters

      it's unrealistic to wish everyone to see jokes are jokes, to rely on journalists to generate unbiased facts, and politicians as self serving leeches, err that's my bias speaking

  38. Apr 2020
    1. and so we can avoid worrying, for example

      On the contrary, in the social imaginary I notice that people are increasingly wary of "the cloud" (without being proactive about it, though…): the example of leaked photos is probably the one that circulates the most.

  39. Mar 2020
    1. 10,000 CPU cores

      10,000 CPU cores for 2 weeks. Question:

      1. Where can we find 10,000 CPU cores in China? AWS? Ali? Tencent?


  40. Jan 2020
    1. Lightning Platform allows you to build employee-facing apps to customize and extend your Salesforce CRM. With Heroku you can go even further, building pixel-perfect applications for your customers in open-source languages like Java, Ruby, Python, PHP, JavaScript, and Go.

      Lightning: internal apps, employee-facing
      Heroku: non-internal open-source apps, taking advantage of organization infrastructure and data

      Of course, if not all employees are Salesforce users/licensed, then a "non-internal" app could still be an organizational app, but without having to secure expensive Salesforce licenses. Also, if one wanted to build an external "customer" engagement community, it could be done without needing a Salesforce Community and associated community licenses.

  41. Dec 2019
  42. Oct 2019
    1. the CMfg paradigm and concept provides a collaborative network environment (the Cloud) where users can select the suitable manufacturing services from the Cloud and dynamically assemble them into a virtual manufacturing solution to execute a selected manufacturing task

      Cloud Computing in SCM

  43. Aug 2019
    1. Unfortunately, as I understand it, AWS EventBridge does not support CloudEvents for the time being, though of course that may change. For example, Azure Event Grid supports CloudEvents, besides its own proprietary schema.

      CloudEvents as a standard/convention for multi-cloud triggers. I didn't know this existed.
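
      For reference, a hedged sketch of a CloudEvents v1.0 envelope, the vendor-neutral format mentioned here; the type, source, and data values are illustrative, not taken from any provider:

        // A minimal CloudEvents v1.0 event expressed as a plain JavaScript object.
        const event = {
          specversion: '1.0',                        // required: spec version
          id: 'a1b2c3d4',                            // required: unique per source
          source: '/storage/bucket-1',               // required: identifies the producer
          type: 'com.example.object.created',        // required: reverse-DNS style event type
          time: '2019-08-01T12:00:00Z',              // optional timestamp
          datacontenttype: 'application/json',       // optional content type of data
          data: { objectKey: 'reports/2019-08.csv' } // the event payload
        };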

  44. Mar 2019
    1. DBA Por Acaso: RDS, MySQL e Tuning para Iniciantes

      Another subject that is not explicitly covered by our topics, but that is an important foundation for the systems administrator in the cloud, and it is covered here at an introductory level so as not to scare anyone off. Also, search for Cloud in https://wiki.lpi.org/wiki/DevOps_Tools_Engineer_Objectives_V1 to see how important this subject is!

    2. Repositorio NPM privado grátis com Verdaccio e AWS

      Excellent for understanding Cloud Deployment in practice (one of our important subtopics!). Besides, you'll leave the talk with more tools for your utility belt!

  45. Dec 2018
    1. To return data after an asynchronous operation, return a promise.

      I just struggled for a few hours trying to resolve some CORS and 500 INTERNAL errors.

      This line is a bit misleading.

      In trying to get things up and running, I was returning a simple JSON object rather than a Promise. According to what I found, even if you don't have any async work, you still need to return a Promise from your function.
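
      A small sketch of the fix described above, assuming an HTTPS callable function; the function name and payload are illustrative, not from the annotated docs:

        const functions = require('firebase-functions');

        // Even when no asynchronous work is done, wrapping the result in a
        // Promise matches the documented contract and avoided the CORS/500
        // symptoms described in the note.
        exports.getGreeting = functions.https.onCall((data, context) => {
          return Promise.resolve({ greeting: 'Hello, ' + (data.name || 'world') });
        });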

  46. Sep 2018
    1. This page will cover creating a new box in Vagrant Cloud and how to distribute it to users. Boxes can be distributed without Vagrant Cloud, but miss out on several important features.

  47. May 2018
    1. One of the largest-scale studies exploring this problem was undertaken at the University of Washington (Fidel et al., 2000), where researchers investigated the information-seeking behavior of teams from two different companies, Boeing and Microsoft (Poltrock et al., 2003). They found that each team had different communication and information-seeking practices, and that current information systems are oriented toward individual rather than collaborative information-seeking activities. In practice, though, information seeking is often embedded in collaboration

      SBTF uses Google Sheets and Docs for information collection and shared documentation. Though Google products are billed as cloud-computing collaboration tools, it would be interesting to know whether these systems remain oriented toward individual information-seeking activities rather than collaborative ones.

  48. Apr 2018
    1. By eliminating cold servers and cold containers with request-based pricing, we’ve also eliminated the high cost of idle capacity and helped our customers achieve dramatically higher utilization and better economics.

      "Cold servers" and "cold containers" are terms I hadn't heard before, but they sum up the waste of excess capacity nicely.

  49. Nov 2017
    1. Figure 4: Typical diurnal cycle for traffic in the Internet. The scale on the vertical axis is the percentage of total users of the service that are on-line at the time indicated on the horizontal axis. (Source: [21])

      I can't see an easy way to link to this graph itself, but this reference should make it easier to get to this image in future

  50. Sep 2017
  51. May 2017
  52. Apr 2017
    1. a personal computer to a server or onto the Internet. File distribution is the point of conjuncture between organism and machine and marks a technology of the self that does not begin with the individual interior subject but rather with what Doyle calls "inhuman exteriority"

      I feel like there is something to be said about the Cloud ("the Cloud is just somebody else's computer") that could expand on this point and develop it a bit further, but I don't know enough about the Cloud and file sharing to articulate what that might be...

  53. Sep 2015
    1. using Cloud-based services, despite their creators' best intentions, has considerable risks since any Cloud-based approach creates massive, hugely attractive, fragile Honeypots
  54. Aug 2015
    1. Shared information

      The “social”, with an embedded emphasis on the data part of knowledge building and a nod to solidarity. Cloud computing does go well with collaboration, and spelling out the difference can help clear up some confusion.

  55. Jun 2015
    1. Apple’s superior position on privacy needs to be the icing on the cake, not their primary selling point.

      Yeah... 'cause apparently no one actually cares...

  56. Jan 2015
    1. Today the move to cloud computing is replicating some of that early rhetoric—except, of course, that companies now reject any analogy with utilities, since that might open up the possibility of a publicly run, publicly controlled infrastructure.

      That's a distinct possibility - if the infrastructure could be built with trust. Keep hoping.

  57. May 2014
    1. Specifically, we explore three key usage modes (see Figure 1):

      • HPC in the Cloud, in which researchers outsource entire applications to current public and/or private Cloud platforms;
      • HPC plus Cloud, focused on exploring scenarios in which clouds can complement HPC/grid resources with cloud services to support science and engineering application workflows, for example, to support heterogeneous requirements or unexpected spikes in demand; and
      • HPC as a Service, focused on exposing HPC/grid resources using elastic on-demand cloud abstractions, aiming to combine the flexibility of cloud models with the performance of HPC systems

      Three key usage modes for HPC & Cloud:

      • HPC in the Cloud
      • HPC plus Cloud
      • HPC as a Service
  58. Apr 2014
    1. Clouds establish a new division of responsibilities between platform operators and users than have traditionally existed in computing infrastructure. In private clouds, where all participants belong to the same organization, this creates new barriers to effective communication and resource usage. In this paper, we present poncho, a tool that implements APIs that enable communication between cloud operators and their users, for the purposes of minimizing impact of administrative operations and load shedding on highly-utilized private clouds.

      Poncho: Enabling Smart Administration of Full Private Clouds

      http://www.mcs.anl.gov/papers/P5024-1013.pdf

    2. One of the critical pieces of infrastructure provided by this system is a mechanism that can be used for load shedding, as well as a way to communicate with users when this action is required. As a building block, load shedding enables a whole host of more advanced resource management capabilities, like spot instances, advanced reservations, and fairshare scheduling