Resources

Low-Latency Innovations with MongoDB Atlas, Realm, and AWS Wavelength

The arrival of 5G networks signals future growth in low-latency business opportunities. Whether in increasingly popular areas such as gaming, AR/VR, and AI/ML, or in more critical domains such as autonomous vehicles and remote surgery, companies have an opportunity to take advantage of low-latency application services and connectivity. This kind of instant communication powered by 5G is still largely in its infancy, but customers are quickly adapting to its advantages, and new end-user expectations mean that backend service providers must keep up with growing demand. At the same time, business customers want to seamlessly deploy the familiar cloud-based backend services they already know, close to the data source or the end user. With MongoDB Realm and AWS Wavelength, you can now develop applications that take advantage of 5G's lower latency and higher throughput, using the same tools you already know. This post explores the benefits of AWS Wavelength, MongoDB Atlas, and Realm; how to set up and use each service; and how to combine them to build better web and mobile applications and improve user experience. We will also walk through a real-world use case, using a smart factory as an example.

Introduction to MongoDB Atlas & Realm on AWS

MongoDB Atlas is a global cloud database service for modern applications. Atlas is the best way to run MongoDB on AWS because, as a fully managed database-as-a-service, it runs on industry-leading, reliable AWS infrastructure while offloading the burden of operations, maintenance, and security to the world's leading MongoDB experts. MongoDB Atlas lets you build applications that are highly available, performant at a global scale, and compliant with the most demanding security and privacy standards. When you use MongoDB Atlas on AWS, you can focus on driving innovation and business value instead of managing infrastructure. Services such as Atlas Search, Realm, and Atlas Data Lake are also available, making MongoDB Atlas the most comprehensive data platform on the market. MongoDB Atlas integrates seamlessly with many AWS products; click here to learn more about common integration patterns.

Why AWS Wavelength?

AWS Wavelength is an AWS infrastructure offering optimized for mobile edge computing applications. Wavelength Zones are AWS infrastructure deployments that embed AWS compute and storage services within communications service providers' (CSP) data centers. AWS Wavelength lets customers in 13 US cities, as well as London (UK), Tokyo and Osaka (Japan), and Daejeon (South Korea), use industry-leading, familiar AWS tools while moving user data closer to the users themselves. Combine Wavelength with MongoDB's flexible data model and the responsive Realm database for mobile and edge applications, and customers get a familiar platform that runs anywhere and scales to meet changing demands.

Why Realm?

Realm's integrated application development services make it easy for developers to build industry-leading apps for mobile devices and the web. Realm has three key features:

Cross-platform mobile and edge database
Cross-platform mobile and edge sync solution
Time-saving application development services

1. Mobile and edge database

Realm's mobile database is an open source, developer-friendly alternative to CoreData and SQLite. With Realm's open source database, mobile developers can build offline-first apps in a fraction of the time. Supported languages include Swift, C#, Xamarin, JavaScript, Java, React Native, Kotlin, and Objective-C. Realm's database is built on a flexible, object-oriented data model, so it is simple to learn and mirrors the way developers already write code. Because it was built for mobile, applications based on Realm are reliable, performant, and work across platforms.

2. Mobile and edge sync solution

Realm Sync is an out-of-the-box synchronization service that keeps data up to date between devices, end users, and the backend in real time. It eliminates the need to work with REST and simplifies offline-first app architectures. Use Sync to back up user data, build collaborative features, and keep data current whenever devices come online, without worrying about conflict resolution or networking code.

Figure 2: High-level architecture of implementing Realm in a mobile application

Powered by the Realm mobile and edge database on the client side and MongoDB Atlas on the backend, Realm is optimized for offline use and scales with you. Building best-in-class apps has never been easier.

3. Application development services

With Realm app development services, your team can spend less time integrating backend data for your web apps and more time building the innovative features that push your business initiatives forward. Services include:

GraphQL
Functions
Triggers
Data access controls
User authentication

Reference architecture

High-level design

Terminology-wise, we will be discussing three main tiers for data persistence: Far Cloud, Edge, and Mobile/IOT. The Far Cloud is the traditional cloud infrastructure business customers are used to. Here, the main parent AWS regions (such as US-EAST-1 in Virginia and US-WEST-2 in Oregon) are used for centralized retention of all data. While these regions are well known and trusted, the issue is that not many users or IOT devices are located in close proximity to these massive data centers, and internet-routed traffic is not optimized for low latency. As a result, we use AWS Wavelength Zones as our Edge Zones. An Edge Zone synchronizes the relevant subset of data from the centralized Far Cloud to the Edge. Partitioning principles are applied so that users' data is stored closer to them in one or a handful of these Edge Wavelength Zones, typically located in major metropolitan areas. The last layer of data persistence is on the mobile or IOT devices themselves. On modern 5G infrastructure, data can be synchronized to a nearby Edge Zone with low latency. For less latency-critical applications, or in areas where the parent AWS regions are closer than the nearest Wavelength Zone, data can also go directly to the Far Cloud.

Figure 3: High-level design of modern edge-aware apps using 5G, Wavelength, and MongoDB
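Before moving from the architecture to the concrete use case, here is a minimal sketch of the developer experience the rest of this post builds on: a hypothetical Realm object model and a local, offline-first write using the Realm .NET SDK (the class, field, and value names are illustrative only; Sync configuration comes later in the post):

    using MongoDB.Bson;
    using Realms;

    // A hypothetical model; Realm persists plain classes that inherit from RealmObject.
    public class SensorReading : RealmObject
    {
        [PrimaryKey]
        [MapTo("_id")]
        public ObjectId Id { get; set; } = ObjectId.GenerateNewId();

        [MapTo("device")]
        public string DeviceName { get; set; }

        [MapTo("reading")]
        public int Reading { get; set; }
    }

    public static class LocalDemo
    {
        public static void Main()
        {
            // Opens (or creates) the local database file on the device; no network required.
            using var realm = Realm.GetInstance();

            // All Realm writes happen inside a transaction.
            realm.Write(() => realm.Add(new SensorReading { DeviceName = "press-01", Reading = 42 }));
        }
    }

Once Sync is enabled, the same Write call also queues the object for replication to the Edge and Far Cloud tiers described above.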
Smart factory use case: Using Wavelength, MQTT, and Realm Sync

Transitioning from the theoretical, let's dig one level deeper into a reference architecture. One common use case for 5G and low-latency applications is a smart factory. Here, IOT devices in a factory connect to 5G networks for both telemetry and command/control. Typically signaling over MQTT, these sensors send messages to a nearby Wavelength Edge Zone. Once there, machine learning and analysis can occur at the edge, and data can be replicated back to the Far Cloud parent AWS regions. This is critical because compute capabilities at the edge, while low-latency, are not always full-featured; centralizing many factories together therefore makes sense for long-term storage, analytics, and multi-region sync. Once data is in the Edge or the Far Cloud, consumers of this data (such as AR/VR headsets, mobile phones, and more) can access it with low latency for needs such as maintenance, alerting, and fault identification.

Figure 4: High-level three-tiered architecture of what we will be building through this blog post

Latency-sensitive applications cannot simply write to Atlas directly. Instead, Realm is powerful here because it can run on mobile devices as well as on servers (such as in the Wavelength Zone) and provide low-latency local reads and writes. It seamlessly synchronizes data in real time from its local partition to the Far Cloud, and from the Far Cloud back out to other Edge Zones. Developers do not need to write complex sync logic; instead, they can focus on driving business value by writing applications that deliver high performance and low latency. For highly available applications, AWS services such as Auto Scaling groups can be used to meet the availability and scalability requirements of the individual factory. Traditionally, this would be fronted by a load-balancing service from AWS or an open source solution such as HAProxy. Carrier gateways are deployed in each Wavelength Zone, and the carrier or client can handle routing to the nearest Edge Zone.

Setting up Wavelength

Deploying your application into Wavelength requires the following AWS resources:

A Virtual Private Cloud (VPC) in your region
A Carrier Gateway — a service that allows inbound/outbound traffic to/from the carrier network
A Carrier IP — an address that you assign to a network interface that resides in a Wavelength Zone
A public subnet
An EC2 instance in the public subnet
An EC2 instance in the Wavelength Zone with a Carrier IP address

We will be following the "Get started with AWS Wavelength" tutorial located here. At least one EC2 compute instance in a Wavelength Zone will be required for the subsequent Realm section below. The high-level steps are:

Enable Wavelength Zones for your AWS account
Configure networking between your AWS VPC and the Wavelength Zone
Launch an EC2 instance in your public subnet; this will serve as a bastion host for the subsequent steps
Launch the Wavelength application
Test connectivity (see the sketch after this list)
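For that last step, a small hedged sketch like the following can confirm that the application port on the Wavelength instance is reachable from a device on the carrier network (the Carrier IP and port are hypothetical placeholders):

    using System;
    using System.Net.Sockets;

    public static class ConnectivityCheck
    {
        public static void Main()
        {
            const string carrierIp = "203.0.113.10"; // hypothetical Carrier IP of the Wavelength EC2 instance
            const int port = 1883;                   // the port our MQTT broker will listen on

            using var client = new TcpClient();
            try
            {
                client.Connect(carrierIp, port);
                Console.WriteLine($"Reached {carrierIp}:{port} from the carrier network.");
            }
            catch (SocketException ex)
            {
                Console.WriteLine($"Could not reach {carrierIp}:{port}: {ex.Message}");
            }
        }
    }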
Setting up Realm

The Realm components listed above break out into three independent steps:

Set up a Far Cloud MongoDB Atlas cluster on AWS
Configure the Realm serverless infrastructure (including enabling Sync)
Write a reference application utilizing Realm

1. Deploying your Far Cloud with Atlas on AWS

For this first section, we will use a very basic Atlas deployment. For demonstration purposes, even the MongoDB Atlas free tier (called an M0) suffices. You can leverage the AWS MongoDB Atlas Quick Start to launch the cluster, so we will not enumerate the steps in specific detail. The high-level instructions are:

Sign up for a MongoDB Atlas account at cloud.mongodb.com, then sign in
Click the Create button to display the Create New Database Deployment dialog
Choose a "Shared" cluster, then choose the M0 (free) size
Be sure to choose AWS as the cloud; here we will be using US-EAST-1
Deploy, and wait for the cluster to complete deployment

2. Configuring Realm and Realm Sync

Once the Atlas cluster has finished deploying, the next step is to create a Realm application and enable Realm Sync. Realm has a full user interface inside the MongoDB Cloud Platform at cloud.mongodb.com; it also has a CLI and API that connect to CI/CD pipelines and processes, including integration with GitHub. The steps that follow are a high-level overview of a reference application located here. Since Realm configurations can be exported, the configuration can be imported into your environment from that repository. The high-level steps to create this configuration are as follows:

While viewing your cluster at cloud.mongodb.com, click the Realm tab at the top
Click "Create a New App" and give it a name such as RealmAndWavelength
Choose the cluster you deployed in the previous step as the target cluster for Sync

Now we have a Realm app deployed. Next, we need to configure the app to enable Sync. Sync requires credentials for each sync application; you can learn more about authentication here. Our application will use API key authentication. To turn that on:

Click Authentication on the left
On the Authentication Providers tab, find API Keys and click Edit
Turn on the provider and click Save

If Realm has Drafts enabled, a blue bar will appear at the top asking you to confirm your changes; confirm and deploy the change. You can now create an API key by pressing the "Create API Key" button and giving it a name. Be sure to copy the key down for our application later, as it cannot be retrieved again for security reasons. Also, in the top left of the Realm UI there is a button to copy the Realm App ID; we will need both this ID and the API key when we write our application shortly.

Lastly, we can enable Sync. The Sync configuration relies on a schema for the data being written. This allows the objects (i.e., C# or Node.js objects) from the application we write in the next step to be translated into MongoDB documents; you can learn more about schemas here. We also need to identify a partition key. Partition keys decide which subset of data should reside on each Edge node or each mobile device. For Wavelength deployments, this is typically a variation on the region name. A good partition key could be one unique value per API key, or the name of the Wavelength region (e.g., "BOS" or "DFW"). With the latter example, your Far Cloud retains data for all zones, but the Wavelength Zone in Boston will only hold data tagged with "BOS" in the _pk field.
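Tying those pieces together, the client-side connection might look like this minimal sketch (the App ID and key are placeholders; App.Create, LogInAsync, and Credentials.ApiKey are the .NET SDK calls used by the reference application, while the partition-configuration class name may vary by SDK version):

    using System.Threading.Tasks;
    using Realms;
    using Realms.Sync;

    public static class EdgeStartup
    {
        public static async Task<Realm> OpenEdgeRealmAsync(string appId, string apiKey)
        {
            var app = App.Create(appId);
            var user = await app.LogInAsync(Credentials.ApiKey(apiKey));

            // Each Edge node or device opens only its own partition, e.g. the Boston Wavelength Zone.
            var config = new PartitionSyncConfiguration("BOS", user);
            return await Realm.GetInstanceAsync(config);
        }
    }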
The two ways to define a schema are to write the JSON by hand or to generate it automatically. For the former, we would go to the Sync configuration, edit the Configuration tab, choose the cluster we deployed earlier, define a partition key (such as _pk as a string), and then define the rules for what that user is allowed to read and write; you must then write the schema in the Schema section of the Realm UI. However, it is often easier to let Realm auto-detect and write the schema for you. This can be done by putting Sync into "Development Mode." While you still choose the cluster and partition key, you only need to specify which database to sync your data to. After that, you define classes in the application written below, and upon connection to Realm Sync, the Sync engine automatically translates the classes defined in your application into the underlying JSON representing that schema.

3. Writing an application using Realm Sync: An MQTT broker for a smart factory

Now that the backend data storage is configured, it is time to write the application. As a reminder, we will be writing an MQTT broker for a smart factory: IOT devices will write MQTT messages to this broker over 5G, and our application will take each packet of information and insert it into the Realm database. Because we completed the Sync configuration above, Edge-to-Far-Cloud synchronization is automatic from there, and it works bidirectionally. The reference application mentioned above is available in this GitHub repository. It is based on creating a C# console application with the documentation here. The code is relatively straightforward:

Create a new C# console application in Visual Studio
Like any other C# console application, have it take the Realm App ID and API key as CLI arguments. These should be passed in via Docker environment variables later; their values are the ones you recorded in the previous Sync setup step
Define the RealmObject, which is the data model to write to Realm
Process incoming MQTT messages and write them to Realm

The data model for Realm objects can be as complex as makes sense for your application. To prove this all works, we will keep the model basic:

    public class IOTDataPoint : RealmObject
    {
        [PrimaryKey]
        [MapTo("_id")]
        public ObjectId Id { get; set; } = ObjectId.GenerateNewId();

        [MapTo("_pk")]
        public string Partition { get; set; }

        [MapTo("device")]
        public string DeviceName { get; set; }

        [MapTo("reading")]
        public int Reading { get; set; }
    }

To sync an object, it must inherit from the RealmObject class. After that, just define getters and setters for each data point you want to sync. The C# implementation will vary depending on which MQTT library you choose. Here we used MQTTnet, so we simply create a new broker with new MqttFactory().CreateMqttServer() and start it with MqttServerOptionsBuilder options that define anything unique to the setup, such as port, encryption, and other basic broker information. We also hook incoming messages with .WithApplicationMessageInterceptor() so that any time a new MQTT packet arrives at the broker, it is sent to a method that writes it to Realm. The actual Realm code is also simple:

Create an app with App.Create(), which takes the App ID passed in as a CLI argument
Log in with app.LogInAsync(Credentials.ApiKey()), where the API key is again passed in as a CLI argument from what we generated before
To insert into the database, all Realm writes must happen in a transaction. The syntax is straightforward: instantiate an object based on the RealmObject class we defined previously, then perform the write with realm.Write(() => realm.Add(dataPoint))
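Putting those pieces together, the broker wiring might look roughly like the following hedged sketch against MQTTnet v3-style APIs (the port, partition value, and payload format are assumptions for illustration; IOTDataPoint and the sync configuration come from the steps above):

    using System.Text;
    using System.Threading.Tasks;
    using MQTTnet;
    using MQTTnet.Server;
    using Realms;
    using Realms.Sync;

    public static class Broker
    {
        public static async Task RunAsync(PartitionSyncConfiguration syncConfig)
        {
            var options = new MqttServerOptionsBuilder()
                .WithDefaultEndpointPort(1883)
                // Intercept every published message and persist it to Realm.
                .WithApplicationMessageInterceptor(context =>
                {
                    // Realm instances are thread-confined, so open one per callback.
                    using var realm = Realm.GetInstance(syncConfig);
                    realm.Write(() => realm.Add(new IOTDataPoint
                    {
                        Partition = "BOS", // must match the partition this Edge node syncs
                        DeviceName = context.ApplicationMessage.Topic,
                        // Assumes devices publish a plain integer payload; real devices may differ.
                        Reading = int.Parse(Encoding.UTF8.GetString(context.ApplicationMessage.Payload))
                    }));
                })
                .Build();

            var server = new MqttFactory().CreateMqttServer();
            await server.StartAsync(options);
        }
    }

From here, Realm Sync replicates each write from the Edge to the Far Cloud without any additional networking code.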
Finally, we wrap this up in a Docker container for easy distribution; Microsoft has a good tutorial on running an application like this inside a Docker container with auto-generated Dockerfiles. On top of the auto-generated Dockerfile, be sure to pass the Realm App ID and API key into the application as we defined earlier. The inner workings of writing a Realm application are largely outside the scope of this blog post, but there is an excellent tutorial within MongoDB University if you would like to learn more about the Realm SDK. Now that the application is running, and in Docker, we can deploy it in the Wavelength Edge Zone we created above.

Bringing Realm and Wavelength together

To access the application server in the Wavelength Zone, we must go through the bastion host we created earlier. Once we've gone through that jump box to reach the EC2 instance in the Wavelength Zone, we can install any prerequisites (such as Docker) and start the Docker container running the Realm edge database and the MQTT application. Any new inbound messages received by this MQTT broker are first written to the Edge and then seamlessly synced to Atlas in the Far Cloud. A sample MQTT random-number-generator container suitable for testing this environment is located in the GitHub repository mentioned earlier. Our smart factory reference application is complete! At this point:

Smart devices can write to a 5G Edge with low latency, courtesy of AWS Wavelength Zones
MQTT messages written to the broker in the Wavelength Zone get low-latency writes and are immediately available for reads, since everything happens at the Edge through MongoDB Realm
Those messages are automatically synchronized to the Far Cloud for permanent retention, analysis, or synchronization to other zones via MongoDB Realm Sync and Atlas

What's next

Get started with MongoDB Realm on AWS for free:

Create a MongoDB Realm account
Deploy a MongoDB backend in the cloud with a few clicks
Start building with Realm
Deploy AWS Wavelength in your AWS account

Building a Single View of the Customer with MongoDB Atlas and Cogniflare's Customer 360

The key to a successful, lasting business is knowing your customers. If you truly understand your customers, you understand their needs and can determine the right products to deliver to them, in the right way, at the right time. For most B2C enterprises, however, building a single view of the customer is a major hurdle because of the sheer volume of fragmented data. Businesses collect customer data from multiple places, such as e-commerce platforms, CRMs, ERPs, loyalty programs, payment portals, web apps, and mobile apps. Each data set can be structured, semi-structured, or unstructured, delivered as streams or requiring batch processing, which makes compiling the already fragmented customer data even more complex. This has led some organizations to adopt custom-built solutions that still deliver only a partial view of the customer. Siloed data sets make running operations such as customer service, targeted marketing, and advanced analytics (like churn prediction and recommendations) extremely challenging. Only with a 360-degree view of the customer can organizations gain deep insight into their needs, wants, and requirements, and how to satisfy them. A single, 360-degree view of the data is therefore essential for lasting relationships. In this blog, we cover how to build a single view of the customer using MongoDB's database and Cogniflare's Calleido Customer 360 tool. We also explore a real-world use case focused on sentiment analysis.

Building a single view with Calleido's Customer 360

With a Customer 360 database, organizations can access and analyze individual interactions and touchpoints to build a holistic view of the customer. This is achieved by ingesting data from many different sources. However, routing and transforming that data is a complex and time-consuming process, and many existing big data tools are not compatible with cloud environments. These challenges inspired Cogniflare to create Calleido.

Figure 1: Calleido Customer 360 Use Case Architecture

Calleido is a data processing platform built on top of battle-tested open source tools such as Apache NiFi. Calleido comes with more than 300 processors for moving structured and unstructured data from anywhere to anywhere. It facilitates both batch and real-time updates and handles simple data transformations. Crucially, Calleido integrates seamlessly with Google Cloud and offers one-click deployment. It uses Google Kubernetes Engine to scale up and down with demand, and it provides an intuitive, fluid, low-code development environment.

Figure 2: Calleido Data Pipeline to Copy Customers From PostgreSQL to MongoDB

A real-world use case: Sentiment analysis of customer emails

To showcase the capabilities of Cogniflare's Calleido, MongoDB Atlas, and a Customer 360 view, consider a use case that runs sentiment analysis on customer emails. To streamline the building of a Customer 360 database, Cogniflare's team created flow templates for implementing data pipelines in seconds. In the sections that follow, we walk through some of the most common data movement patterns for this Customer 360 use case and show a sample dashboard.

Figure 3: Sample Customer Dashboard

The flow starts with a processor pulling IMAP messages from an email server (ConsumeIMAP). Each new email arriving in the selected inbox (e.g., customer service) triggers an event. Next, the process extracts the email headers to determine subject line details about the email content (ExtractEmailHeaders). Calleido identifies the customer using the sender's email address (UpdateAttribute) and extracts the full email body by executing a script (ExecuteScript). With all of the data gathered, the message payload is prepared and published via Google Cloud Platform (GCP) Pub/Sub (Kafka could also be used) for consumption by downstream flows and other services.

Figure 4: Transforming Emails Into Cloud Pub/Sub Messages

The GCP Pub/Sub messages from the previous flow are then consumed (ConsumeGCPPubSub). This is where the power of the MongoDB Atlas integration comes in, as we validate each sender against the MongoDB database (GetMongo). If the customer exists in our system, we pass the email data to the next flow. Other emails are discarded.

Figure 5: Validating Customer Emails With MongoDB

Calleido then analyzes a copy of the email body. For this flow, we use a processor to prepare a request body, which is sent to Google Cloud Natural Language AI to assess the tone and sentiment of the message. The results from the language processing API then go straight into MongoDB Atlas, so they can be pulled into the dashboard.

Figure 6: Cloud AutoML Calls With Calleido in the Dashboard

The end result: the Customer 360 database can be used by internal back-office systems to supplement and inform customer support. With a single view, problems can be resolved, returns processed, and complaints addressed faster and more effectively. Leveraging the information from previous client conversations ensures each customer is given the most appropriate and effective response. These data sets can then be fed into analytics systems to generate learnings and optimizations, such as associating negative sentiment with churn rate.
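In a custom integration, the final step of landing a sentiment score on the customer document in Atlas could look something like this hedged .NET driver sketch (the database, collection, and field names are hypothetical):

    using System;
    using MongoDB.Bson;
    using MongoDB.Driver;

    public static class SentimentWriter
    {
        public static void Main()
        {
            var client = new MongoClient("mongodb+srv://<user>:<password>@<cluster>.mongodb.net");
            var customers = client.GetDatabase("crm").GetCollection<BsonDocument>("customers");

            // Attach the Natural Language AI result to the matching customer document.
            var filter = Builders<BsonDocument>.Filter.Eq("email", "jane@example.com");
            var update = Builders<BsonDocument>.Update.Push("sentiments", new BsonDocument
            {
                { "score", -0.6 },   // e.g. a negative tone
                { "magnitude", 1.2 },
                { "receivedAt", DateTime.UtcNow }
            });

            customers.UpdateOne(filter, update);
        }
    }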
How MongoDB's document database helps

In the example above, Calleido takes care of copying and routing data from the business source systems into MongoDB Atlas, the operational data store (ODS). Thanks to MongoDB's flexible data structure, we can transfer data in its original format and subsequently implement any necessary schema transformations in an iterative manner. There is no need to run complex schema migrations. This allows for the quick delivery of a single view database.

Figures 7 & 8: Calleido Data Pipelines to Copy Products and Orders From PostgreSQL to MongoDB Atlas

Calleido lets us make this transition in just a few simple steps. The tool runs a custom SQL query (ExecuteSQL) that joins all the required data from the outer tables and compiles the results in order to parallelize the processing. The data arrives in Avro format; Calleido then converts it into JSON (ConvertAvroToJSON) and transforms it to the schema designed for MongoDB (JoltTransformJSON).

End result in the Customer 360 dashboard:

MongoDB Atlas is the market-leading choice for the Customer 360 database. Here are the core reasons for its world-class standard:

MongoDB can efficiently handle non-standardized schemas coming from legacy systems and efficiently store any custom attributes.
Data models can include all the related data as nested documents. Unlike SQL databases, MongoDB avoids complicated join queries, which are difficult to write and not performant.
MongoDB is fast. The current view of a customer can be served in milliseconds without the need to introduce a caching layer.
The flexible schema model enables agility with an iterative approach. In the initial extraction, the data can be copied nearly exactly in its original shape, which drastically reduces time to delivery. In subsequent phases, the schema can be standardized and the quality of the data improved without complex SQL migrations.
MongoDB can store dozens of terabytes of data across multiple data centers and easily scale horizontally. Data can be distributed across multiple regions to help navigate compliance requirements. Separate analytics nodes can be set up to avoid impacting the performance of production systems.
MongoDB has a proven record of acting as a single view database, with legacy and large organizations up and running with prototypes in two weeks and in production within a business quarter.
MongoDB Atlas can autoscale out of the box, reducing costs and handling traffic peaks.
Data can be encrypted both in transit and at rest, helping to achieve compliance with security and privacy standards, including GDPR, HIPAA, PCI-DSS, and FERPA.

Upselling the customer: Product recommendations

Upselling customers is a key part of modern business, but the secret to doing it successfully is that it's less about selling and more about educating. It's about using data to identify where the customer is in the customer journey, what they may need, and which product or service can meet that need. Using a customer's purchase history, Calleido can help prepare product recommendations by routing data to the appropriate tools, such as BigQuery ML. These recommendations can then be promoted through the call center and marketing teams for both online and mobile app recommendations. There are two flows to achieve this: preparing the training data and generating the recommendations.

Preparing training data

First, the appropriate data is transferred from PostgreSQL to BigQuery using the ExecuteSQL processor. The data pipeline is scheduled to execute periodically. In the next step, the appropriate data is fetched from PostgreSQL, divided into 1K-row chunks with the ExecuteSQLRecord processor. These files are then passed to the next processor, which uses load balancing to utilize all available nodes. All of that data then gets inserted into a BigQuery table using the PutBigQueryStreaming processor.

Figure 9: Copying Data from PostgreSQL to BigQuery with Calleido

Generating product recommendations

Next, we move on to generating the product recommendations. First, you must purchase BigQuery capacity slots, which offer the most affordable way to take advantage of BigQuery ML features. Here, Calleido invokes an SQL procedure with the ExecuteSQL processor, then ensures that the requested BigQuery capacity is ready to use. The next processor (ExecuteSQL) executes an SQL query responsible for creating and training the Matrix Factorization ML model using the data copied by the first flow. Next in the queue, Calleido uses the ExecuteSQL processor to query the trained model for all the predictions and store them in a dedicated BigQuery table. Finally, the Wait processor waits for both capacity slots to be removed, as they are no longer required.

Figures 10 & 11: Generating Product Recommendations with Calleido
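Once the flows below land the top predictions in MongoDB, a consuming application (a call-center screen or a mobile app) can serve them with a single query. Here is a hedged .NET driver sketch with hypothetical database, collection, and field names:

    using System;
    using MongoDB.Bson;
    using MongoDB.Driver;

    public static class RecommendationReader
    {
        public static void Main()
        {
            var client = new MongoClient("mongodb+srv://<user>:<password>@<cluster>.mongodb.net");
            var recs = client.GetDatabase("crm").GetCollection<BsonDocument>("recommendations");

            // Assumes each document holds the top 10 predicted products for one customer.
            var doc = recs.Find(Builders<BsonDocument>.Filter.Eq("customerId", 1234)).FirstOrDefault();
            if (doc is null) return;

            foreach (var product in doc["products"].AsBsonArray)
            {
                Console.WriteLine(product);
            }
        }
    }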
Next, we remove the old recommendations using two processors. First, the ReplaceText processor updates the content of the incoming flow files, setting the query body. This is then used by the DeleteMongo processor to perform the removal.

Figure 12: Removing Old Recommendations

The whole flow ends with copying the recommendations into MongoDB. The ExecuteSQL processor fetches and aggregates the top 10 recommendations per user, in chunks of 1K rows. The following two processors (ConvertAvroToJSON and ExecuteScript) then prepare the data to be inserted into the MongoDB collection by the PutMongoRecord processor.

Figure 13: Copying Recommendations to MongoDB

End result in the Customer 360 dashboard (the data used in this example is autogenerated):

Benefits of Calleido's Customer 360 database on MongoDB Atlas

Once the data is available in a centralized operational data store like MongoDB, Calleido can be used to sync it with an analytics data store such as Google BigQuery. Thanks to the Customer 360 database, internal stakeholders can then use the data to:

Improve customer satisfaction through segmentation and targeted marketing
Accurately and easily complete compliance audits
Build demand planning forecasts and analyses of market trends
Reward customer loyalty and reduce churn

Ultimately, a single view of the customer enables organizations to deliver the right message to prospective buyers, funnel those at the brand awareness stage into the conversion stage, and ensure that retention and post-sales mechanics are working effectively. Historically, building a 360-degree view of the customer was a complex and fragmented process, but with Cogniflare's Calleido and MongoDB Atlas, a Customer 360 database has become the most powerful and cost-efficient data management stack an organization can harness.

MongoDB Employees Share Their Coming Out Stories: National Coming Out Day 2021

Every year on October 11, National Coming Out Day is widely recognized in the United States. MongoDB proudly supports and embraces the LGBTQIA+ community around the world, so we have reimagined this celebration as (Inter)National Coming Out Day. In our annual tradition of celebrating (Inter)National Coming Out Day, we asked employees in the LGBTQIA+ community to share their coming out experiences. These are their stories.

Jamie Ivanov, Escalation Manager

For as long as I can remember, I wanted to play with dolls and felt closer to my female cousins. For someone assigned male at birth, growing up in a fairly conservative family, that was quite difficult. From a very young age I knew I was different, but I lacked a way to describe it. I certainly didn't get the support I needed, so I was raised as a male. My father went out of his way to "make a man out of me" and toughen me up in a way that wasn't the most effective. In school, I still knew I was different because I always felt attracted to both genders, but I was too afraid to admit it. I started an LGBT teen youth group, which gave me a safe place to be myself and admit to people who I really was. People outside of that group were still scary; I knew I had to play it straight or risk being beaten up or harassed, so I tried to push that part of myself aside. In my 30s, after serving in the military and having three children, I realized that I couldn't keep pretending anymore; I wasn't really me. I started telling people I was bisexual, hoping they wouldn't think I was any less of a normal person. Most of the responses I got were along the lines of "yeah, we figured." That took a lot of weight off my shoulders, but something still wasn't right; while admitting it helped explain who I was interested in, it still couldn't explain who I was. Through a series of fortunate and unfortunate events, many of the facades I had built up over the years fell away, and I realized that who I am didn't match the body I was given. Talking to anyone about how I felt or who I was was terrifying, but I eventually told people that I am a transgender woman. It was one of the scariest things I've ever done. Some people didn't understand, and I did lose some family over it, but most people accepted my identity with open arms! Since being true to myself, a tremendous amount of weight has been lifted, and my only regret is not having the resources and courage to come out years earlier. Since coming out as bisexual/pansexual and as a transgender woman, I've built stronger relationships, I feel far more comfortable with myself, and I even like photos of myself (something I always used to hate, and I've realized it was because they weren't really me). When a MongoDB recruiter contacted me, I asked him the same question I ask every recruiter: "How LGBT-friendly is MongoDB (with an emphasis on the transgender part)?" The response I received from my technical recruiter, Bryan Spears, was the best response I've ever gotten from any recruiter or company, and it was the deciding factor in my choice to work at MongoDB. Here's what he said: "MongoDB is a company that genuinely does its best to live by our values, like embracing the power of differences. We have many employees who identify as LGBTQ+ or are allies of the LGBTQ+ community. We also have two ERGs, MongoDB Queeries and UGT (Underrepresented Genders in Tech), both of which aim to create and maintain a safe environment for those who identify as LGBTQ+ or are questioning. From a benefits perspective, we've expanded the number of WPATH standards-of-care services for those who identify as transgender, gender-nonconforming, or transsexual through Cigna. While I don't know whether any of the information I've provided tells you what life is like at MongoDB, I hope it shows that we're doing our best to make sure everyone here is respected and welcome." In some previous jobs, I didn't always get the support I needed, but MongoDB has raised the bar to a level that's hard to compete with. I'm thrilled to have finally found a place that truly accepts me.

Ryan Francis, VP of Global Demand Generation and Field Marketing

Growing up in the '90s in what I used to call the "buckle of the Bible Belt," I did not believe coming out was possible. In fact, I would stay up at night plotting my great escape to New York City after being disowned (how I planned to pay for said escape remains unknown). Nevertheless, there I was with my best friend, Maha. The summer between my sophomore and junior years of high school, I spent time with her family in Egypt. On the trip back, I bought a copy of The Advocate to learn about the big gay life that awaited me after my great escape. Later that month, my mother stumbled upon the magazine while cleaning the house. She waited six months to bring it up, but one day in January she sat me down in the living room and asked, "Are you gay?" I paused for a moment and said… "yup." She started crying and thanked me for being honest with her. A month later, she picked up a rainbow coffee mug at a yard sale and has been Mrs. PFLAG ever since, organizing pride rallies in our little Indiana hometown and sitting on the Episcopal church vestry this year in order to push through our parish's blessing of same-sex marriage. Needless to say, I didn't have to escape. My father was also unequivocally accepting. This is a good thing because my sister Lindsay is a lesbian, so they sure would have had a tough time given that 100% of their kids turned out gay. Lindsay is the real hero here, who stayed in our homeland to raise her children with her wife, changing minds every day so that, hopefully, there will be fewer and fewer kids who actually have to make that great escape.

Angie Byron, Principal Community Manager

Growing up in the Midwest in the '80s and '90s, I was always a "tomboy;" as a young kid, I gravitated to toys like Transformers and He-Man and refused to wear pink or dresses. Since we tended to have a lot in common, most of my best friends growing up were boys; I tended to feel awkward and shy around girls and didn't really understand why at the time. I was also raised both Catholic and Bahá'í, which led to a very interesting mix of perspectives. While both religions have vastly different belief and value systems, the one thing they could agree on was that homosexuality was wrong ("intrinsically immoral and contrary to the natural law" in the case of Catholicism, and "an affliction that should be overcome" in the case of Bahá'í).
Additionally, being "out" as queer at that time in that part of the United States would generally get you made fun of, if not the everlasting crap kicked out of you, so finding other queer people felt nearly impossible. As a result, I was in strong denial about who I was for most of my childhood and made several valiant but ultimately failed attempts at the whole "trying to date guys" thing as a teenager (I liked guys just fine as friends, but when it came to kissing and stuff it was just, er… no.). In the end, I came to the reluctant realization that I must be a lesbian. I knew no other queer people in my life, and so was grappling with this reality alone, feeling very isolated and depressed. So, I threw myself into music and started to find progressively more and more feminist/queer punk bands whose songs resonated with my experiences and what I was feeling: Bikini Kill, Team Dresch, The Need, Sleater-Kinney, and so on.

I came out to my parents toward the end of junior high, quite by accident. Even though I had no concrete plan for doing so, I always figured Mom would be the more accepting one, given that she was Bahá'í (a religion whose basic premise is the unity of religions and the equality of humanity), and I'd have to work on Dad for a bit, since he was raised Catholic and came from a family with more conservative values from an even smaller town in the Midwest. Imagine my surprise when one day, Mom and I were watching Ricki Lake or Sally Jessy Raphael or one of those daytime talk shows. The topic was something like "HELP! I think my son might be gay!" My mom said something offhand like "Wow, I don't know what I would do if one of you came out to me as gay..." And, in true 15-year-old angsty fashion, I said, "Oh YEAH? Well you better FIGURE IT OUT because I AM!" and ran into my room and slammed the door. I remember Mom being devastated, wondering what she did wrong as a parent, and so on. I told her, truly, nothing. My parents were both great parents; home was my sanctuary from bullying at school, and my siblings and I were otherwise accepted exactly as we were, tomboys or otherwise. After we'd finished talking, she told me that I had better go tell my father, so I begrudgingly went downstairs. "Dad… I'm gay." Instead of a lecture or expressing disdain, he just said, "Oh really? I run a gay support group at your junior high!" and I was totally mind-blown. Bizarro world. He was the social worker at my school, so this makes sense, but it was the exact opposite reaction from the one I was expecting. An important life lesson in not prejudging people.

When I moved on to high school, we got… drumroll… the Internet. Here things take a much happier turn. Through my music, I was able to find a small community of fellow queers (known as Chainsaw), including a ton of us from various places in the Midwest. I was able to learn that I was NOT a freak, I was NOT alone, there were SO many other folks who felt the exact same way, and they were all super rad! We would have long talks into the night, support each other through hardships, and more than a few of us met each other in person and hung out in "real life." Finding that community truly saved my life, and the lives of so many others. (Side note: This is also how I got into tech, because the chat room was essentially one gaping XSS vulnerability, and I taught myself HTML by typing various tags in and seeing how they rendered.) I never explicitly came out to anyone in my hometown.
I was too scared to lose important relationships (it turns out I chose my friends well, and they were all completely fine with it, but the prospect of further isolating myself as a teenager was too terrifying at the time). Because of that, when I moved to a whole new country (Canada) and went to college, the very first thing I did on my first day was introduce myself as "Hi, I'm Angie. I've been building websites for fun for a couple of years. Also, I'm queer, so if you're gonna have a problem with that, it's probably best we get it out of the way now so we don't waste each other's time."

Flash forward to today: my Mom is my biggest supporter, has rainbow stickers all over her car, and has gone to dozens of Pride events. Hacking together HTML snippets in a chat room led to a full-blown career in tech. I gleaned a bit more specificity around my identity and now identify as a homoromantic asexual. Many of those folks I met online as a teenager have become lifelong friends. And I work for a company that embraces people for who they are and celebrates our differences. Life is good.

Learn more about Diversity & Inclusion at MongoDB

Interested in joining MongoDB? We have several open roles on our teams across the globe and would love for you to transform your career with us!