[00:03:48] <ShortWave> I'm using a 2d query (with legacy coordinates, don't ask), and while that part works fine...what I need to do is also return the distance from a given record to a provided point.
[00:17:43] <Ramjet> HOW do I build the client libraries and includes... The scons --full used to do it but it has been DEPRECATED and I do not see any new targets in the SConstruct file that replaces --full ?????
[00:31:04] <Ramjet> HELP!!!!!! HOW do I build the client libraries and includes... The scons --full used to do it but it has been DEPRECATED and I do not see any new targets in the SConstruct file that replaces --full ?????
[08:56:51] <Guest92111> hi, I'm building a restful API on a mongo database, and am having troubles figuring out how to properly implement pagination over a collection, ordered by an updatedAt timestamp, anyone have some experience with this problem?
[08:59:38] <Guest92111> when i paginate by 50 results, for example, some documents from the first page might already have been updated by the time i retrieve the second page, which shifts the entire result set. I thought about solving this by specifying a "since" timestamp set to the updatedAt of the last result in my current page, but a problem arises when different documents have the same updatedAt timestamp
[09:00:44] <jinmatt> so what happens if they have the same timestamp
[09:01:04] <jinmatt> they will be listed in the order you want anyway
[09:01:40] <Nodex> gerryvdm : there is no guaranteed way to solve your problem
[09:01:57] <gerryvdm> yeah but say i retrieve 50 documents with the same timestamp, and there are 50 more with that timestamp in the next page, if I ask for all documents "since" the last updatedAt I keep looping over the first 50
[09:02:18] <Nodex> what if the second 50 got changed while receiving the first 50
[09:02:31] <Nodex> the second 50 is now the first 50 and the 3rd 50 are now the second and so on
[09:02:33] <gerryvdm> I actually was thinking about using an ObjectId as the column to sort on, would that be a solution?
[09:02:53] <Nodex> objectId gives you a asc/desc on insertion time
[09:03:12] <gerryvdm> Nodex: in that case I no longer have a "previous" page but I can still retrieve the next set
[09:03:28] <Nodex> you need a previous page to determine the next page
[09:03:28] <gerryvdm> I dont need the ability to go back a page
[09:03:38] <gerryvdm> i just want to synchronize all changes
[09:03:45] <Nodex> you need to know where you "were" to work out where you're "going"
[09:07:08] <Nodex> you would be submitting the same query over and over again
[09:07:26] <Nodex> what if something changes from page 1 ? - it shifts something to page 2
[09:07:41] <gerryvdm> yeah but thats fine, as long as I dont miss any changes
[09:07:44] <Nodex> then something new is in page 1's place ... it's an endless loop
[09:07:55] <Nodex> but you WILL miss changes by that very notion
[09:09:05] <gerryvdm> if i am in the middle of pagination, and i'm at everything updated since midnight last night, if something of a previous page changes right now, i would still get all changes and also my new change in the last page?
[09:09:14] <gerryvdm> wheres the error in my reasoning?
[09:10:19] <Nodex> so you're querying the same query every time?
[09:10:34] <Nodex> you just want the latest 50 documents sorted by update time?
[09:11:06] <gerryvdm> no, say in my first page (2 results per page) i get two documents: "updatedAt: 11:00, updatedAt: 12:00"
[09:11:31] <jinmatt> is this some kind of logging data that needs to be paginated? as per your description I have a feeling that it's never possible to see the end of this list, as your data entries keep changing so frequently
[09:11:39] <gerryvdm> for my next query i need to ask the next 2 documents where updatedAt is greater than 12:00
[09:11:51] <gerryvdm> that works fine as long as updatedAt would be unique
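The usual fix for the tie-breaking problem gerryvdm describes is keyset ("seek") pagination over a compound cursor of (updatedAt, _id): since _id is unique, the pair is unique even when timestamps collide. A minimal sketch, assuming the field names from the discussion; the helper name is ours:

```javascript
// Keyset pagination that breaks updatedAt ties with _id.
// The cursor is the (updatedAt, _id) pair of the last document on the
// previous page; the sort order must match the cursor fields exactly.
function nextPageQuery(lastUpdatedAt, lastId) {
  return {
    filter: {
      $or: [
        { updatedAt: { $gt: lastUpdatedAt } },               // strictly newer docs
        { updatedAt: lastUpdatedAt, _id: { $gt: lastId } },  // same timestamp, later _id
      ],
    },
    sort: { updatedAt: 1, _id: 1 },
  };
}

// With a real driver this would run as something like:
//   db.collection('items').find(q.filter).sort(q.sort).limit(50)
const q = nextPageQuery(new Date('2014-01-01T12:00:00Z'), 'id-of-last-doc');
console.log(JSON.stringify(q.sort)); // {"updatedAt":1,"_id":1}
```

This guarantees forward progress (no re-looping over equal timestamps), at the cost of not detecting documents whose updatedAt moves behind the cursor mid-scan, which matches gerryvdm's stated requirement of only synchronizing forward.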
[10:54:28] <jinmatt> mms agent is having problems connecting to an ec2 instance after adding firewall security rules, I’m getting the following error from the Agent Log “Problem collecting blocking data: could not connect to ip-xx-xx-xx-xx:27017: timed out”
[10:55:02] <jinmatt> if I change the rule to access from any IP, the agent works
[10:55:43] <jinmatt> otherwise, even though I add the agent host machine’s IP address to the access rule list on EC2, it’s not working
[13:05:30] <Industrial> I'm trying to figure out what the best way to save my data is in mongodb. The data is time based so I'm using millisecond timestamps for the _id index of the collections, and per timestamp that happened there may be several keys set. What I'd like to do now is to split that into one collection per key kind instead.
[13:06:27] <Industrial> Because not every document may contain every key, when I make queries for certain key kinds, I'm also querying all data that does not contain those keys. I could eliminate that constraint by just querying a database for the specific data I'm looking for.
[13:07:24] <Industrial> With the Node.js driver, does calling `var collection = db.collection('test_insert');` do anything on the database end? no right? so I could potentially call this thousands of times per second?
[13:07:44] <Industrial> (I'll probably cache it anyway, later, though)
[13:08:20] <Industrial> because I'm receiving a LOT of messages, not knowing which collection they should land in.
[13:14:28] <Nodex> Industrial : your "var" might have a local or global scope, depending on where you set it
[13:15:31] <Industrial> Nodex: yeah, but there's nothing going back and forth between mongo and node trying to 'fetch' a collection, right? It just makes calls on that object target a certain collection rather than another?
[13:16:49] <dexteruk> im new to the world of mongodb, i have a quick question: im looking at the best way to structure my documents. i have a set of dynamic elements, but i have read that with Embedded Documents the document can grow, which can impact write performance. is this really an issue, and what things should i consider?
[13:17:21] <Industrial> dexteruk: try and get something rolling on the system first, and then worry about performance
[13:17:32] <Industrial> it's the best approach :) imho, even for testing purposes
[13:20:32] <dexteruk> so its nothing i should really worry about
[13:21:29] <Nodex> dexteruk : you should think about your structure first. This is the exact opposite of other systems
[13:21:55] <Nodex> you should tailor your app to your data and the fastest way to read/write it else you will NEVER get good performance
[13:22:25] <Nodex> Industrial : the driver will communicate when there is something to send to the database
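As Nodex says, `db.collection('name')` in the Node.js driver just builds a local handle; nothing goes over the wire until an operation runs on it (the exception is "strict" mode with a callback, which does check server-side that the collection exists). Caching handles is still cheap insurance for Industrial's thousands-of-calls-per-second case. A sketch with a stand-in `db` object so it runs without a server; a real driver `Db` has the same `collection(name)` shape:

```javascript
// Memoize collection handles so repeated lookups reuse one object.
const handles = new Map();
function getCollection(db, name) {
  if (!handles.has(name)) handles.set(name, db.collection(name));
  return handles.get(name);
}

// Fake db: counts how many handle objects were actually constructed.
let created = 0;
const fakeDb = { collection: (name) => (created++, { collectionName: name }) };

getCollection(fakeDb, 'test_insert');
getCollection(fakeDb, 'test_insert'); // second call hits the cache
console.log(created); // 1
```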
[13:23:16] <dexteruk> yes i have seen this, in some examples, where you put the content and embed the link
[13:55:02] <paper_ziggurat> i got it, balboah. thank you :)
[14:38:38] <ShortWave> Is there a way to get a distance calculation from a 2d query, without doing a calculation myself?
[14:39:47] <Nodex> one of them returns the distance, can't recall which one that is
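The one Nodex is thinking of is the `geoNear` database command (as opposed to a plain `find()` with `$near`): it wraps each match as `{ dis: <distance>, obj: <document> }`, so no client-side math is needed. A sketch of the command document for a legacy 2d index; the collection name and coordinates are hypothetical:

```javascript
// geoNear command for a collection with a legacy 2d index.
const geoNearCmd = {
  geoNear: 'places',       // collection to search
  near: [-73.98, 40.75],   // legacy [lng, lat] coordinate pair
  num: 10,                 // max results
  spherical: false,        // flat geometry, matching the 2d index
};
// In the shell:
//   db.runCommand(geoNearCmd).results.forEach(function (r) { print(r.dis); });
console.log(geoNearCmd.geoNear); // places
```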
[14:47:55] <yug> Does anyone know if this is possible to do aggregation on aggregation with aggregation module? i.e first to aggregate by hours and the result of this I want to aggregate by day
[14:50:49] <rybnik> yug so, grouped by day and hours ?
[14:53:28] <rybnik> yug would something like this do the work {$group: {_id: { h: {$hour: '$tick'}, d: {$dayOfMonth: '$tick'} }, n: {$sum:1}}} ?
[14:55:38] <yug> rybnik no because this will group by both together. I need to group first by hour and the result to group by day
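What yug wants is possible in one pipeline: `$group` stages can be chained, so the second stage groups the output of the first. A sketch building on rybnik's field name `tick`; the output labels `hours` and `total` are ours:

```javascript
// Stage 1 buckets by (day, hour); stage 2 re-groups those buckets by day.
const pipeline = [
  { $group: {
      _id: { d: { $dayOfMonth: '$tick' }, h: { $hour: '$tick' } },
      n: { $sum: 1 },
  } },
  { $group: {
      _id: '$_id.d',                                 // day of month
      hours: { $push: { hour: '$_id.h', n: '$n' } }, // the hourly counts
      total: { $sum: '$n' },                         // day total
  } },
];
// In the shell: db.ticks.aggregate(pipeline)
console.log(pipeline.length); // 2
```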
[15:01:52] <abique_> Hi, I'm using the C++ driver, and I wonder how to do a query "find by _id" is it conn.query("mydb", QUERY("_id" << "45645645645689765"));
[15:02:42] <abique_> Also, auto_ptr being deprecated, is it possible to use unique_ptr?
[16:04:46] <palominoz> hello there. given my multilanguage documents look like { id: 'some-mongo-id', locales: { en: {type: '}, it: { name: "pane"}}}
[16:08:06] <palominoz> hello there. given my multilanguage documents look like { id: 'some-mongo-id', locales: { en: {type: 'bread'}, it: { name: "pane"}}}, i would like to support fulltext search based on locale, like db.myCollection.runCommand("text", {search: "pane", language: "italian" }). I did run ensureIndex for each language indexable in the locales subdocument. mongo then stops me with error: too many text index for: production.manufacturers. how would you solve this problem? misspressed enter button before :)
[16:09:01] <palominoz> is it possible to have a single index and then discriminate language on the results?
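Yes, roughly: the "too many text index" error is because MongoDB allows only one text index per collection. The usual workaround is a single index plus a per-(sub)document language marker, since a text index honors a `language` field inside embedded documents (the field name can be changed with the `language_override` index option). A sketch, assuming the locales are restructured as an array of subdocuments; names are illustrative:

```javascript
// Documents shaped like:
//   { _id: ..., locales: [ { language: 'english', name: 'bread' },
//                          { language: 'italian', name: 'pane' } ] }
// One text index covers every language:
const indexSpec = { 'locales.name': 'text' };
const indexOpts = { default_language: 'english' };
// In the shell:
//   db.manufacturers.ensureIndex(indexSpec, indexOpts)
//   db.manufacturers.runCommand('text', { search: 'pane', language: 'italian' })
console.log(indexSpec['locales.name']); // text
```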
[16:34:47] <LinkRage> When I use $group in Mongo aggregation the output is array named result. How do I print *only* the inner fields of that array in Mongo CLI?
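In the 2.4-era shell LinkRage describes, `aggregate()` returns a wrapper document `{ result: [...], ok: 1 }`, so printing only the inner documents is a matter of iterating the `result` array. A runnable sketch with a hard-coded response standing in for the server reply (`printjson` is a shell built-in; `console.log` stands in for it here):

```javascript
// Stand-in for what db.coll.aggregate(pipeline) returns in the 2.4 shell.
const response = { result: [{ _id: 'a', n: 2 }, { _id: 'b', n: 1 }], ok: 1 };

// Print each grouped document, not the wrapper.
const lines = response.result.map((doc) => JSON.stringify(doc));
lines.forEach((l) => console.log(l));

// In the real shell:
//   db.coll.aggregate(pipeline).result.forEach(printjson)
```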
[17:23:05] <paper_ziggurat> is it possible to iterate over a collection and then return a value where a certain json element is equal to a predetermined value?
[17:23:19] <paper_ziggurat> like return value where value.element = something?
[17:24:25] <paper_ziggurat> well the value is the document already in the collection
[17:24:48] <demo> hey, anyone here works for mongo's nyc offices by any chance?
[17:27:24] <saalaa__> hello, anyone has any experience accessing 'local.system.namespaces' on MongoLab?
[17:28:05] <NaN> paper_ziggurat: so are you looking for "something" or "value"?
[17:28:48] <saalaa__> (specifically I'm trying to install an Elastic Search MongoDB River which needs access to it but doesn't seem to be able to access it:)
[17:30:39] <paper_ziggurat> NaN, I have a key:value pair, I want to iterate over the collection and return the value where one of the value's json elements is equal to something
[17:32:00] <rkgarcia> paper_ziggurat: a query that finds documents in a collection?
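rkgarcia's point is that no manual iteration is needed: a `find()` with dot notation matches on a nested field and returns the whole document. A sketch where the field names are placeholders for paper_ziggurat's `value.element`, with an in-memory filter standing in for the server so it runs standalone:

```javascript
// Dot-notation query: match documents whose nested field equals a value.
const query = { 'value.element': 'something' };
// In the shell: db.things.find(query)

// In-memory equivalent of what the server does with that query:
const docs = [
  { _id: 1, value: { element: 'something' } },
  { _id: 2, value: { element: 'other' } },
];
const matches = docs.filter((d) => d.value.element === query['value.element']);
console.log(matches.length); // 1
```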
[17:48:20] <saalaa__> for those who might skim through the backlog: it turns out MongoDB's *.system.namespaces, which is needed for an Elastic Search River, is not available on MongoLab plans below the "dedicated" offerings starting at 200 USD
[17:48:32] <saalaa__> the only alternative seems to be self-hosting
[19:03:19] <paper_ziggurat> looking at it right now
[19:03:28] <paper_ziggurat> but actually, i see this selector '$elemMatch'
[19:05:14] <NaN> paper_ziggurat: $elemMatch works on arrays
[19:05:25] <paper_ziggurat> yeah, i just found that out
[19:08:11] <paper_ziggurat> after running the query though, the value that's returned is a big JSON document with details about the collection
[19:33:29] <Guest43094> I’m using 2d indexes in a multiplayer game world to find players near my current position on the map that are online and of a certain type. I’d like to use a compound index for this, but this is obviously only possible with 2dsphere. What I’m not clear about is whether the 2dsphere index will return different results than the 2d index for the same location/data.
[20:15:23] <NaN> where can I configure the GMT offset that new Date() (mongo shell) gives
[20:33:58] <slon> I have an unique index on multiple fields within a collection. When I insert a document containing duplicate data for multiple unique fields I only get back one error. Is there a way to get all errors back?
[20:35:28] <slon> I'm only getting back the first duplicate error, but I want to know all fields that are duplicates for the specific entry.
[20:36:18] <slon> like if name and email have separate unique indices and I'm adding a name and an email that both already exist, I want to get back errors about both fields. Currently, I'm only getting back errors for one field.
[20:44:50] <slon> Basically, I have a new document and I need to know which fields are duplicates.
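MongoDB stops at the first unique-index violation, so to report *all* colliding fields slon would have to check before inserting, e.g. one `$or` query over the unique fields, then compare the candidate against each hit. A sketch with an in-memory stand-in for the query so it runs standalone; the field list is illustrative, and note this pre-check is not atomic, so the insert itself can still fail on a race:

```javascript
const uniqueFields = ['name', 'email'];

// Server-side this would be roughly:
//   db.users.find({ $or: uniqueFields.map(f => ({ [f]: candidate[f] })) })
function findConflicts(existingDocs, candidate) {
  const hits = existingDocs.filter((d) =>
    uniqueFields.some((f) => d[f] === candidate[f]));
  // Work out which unique fields actually collide.
  const conflicts = new Set();
  for (const d of hits)
    for (const f of uniqueFields)
      if (d[f] === candidate[f]) conflicts.add(f);
  return [...conflicts];
}

const existing = [
  { name: 'ann', email: 'a@x.io' },
  { name: 'bob', email: 'b@x.io' },
];
console.log(findConflicts(existing, { name: 'ann', email: 'b@x.io' }));
// [ 'name', 'email' ]
```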
[21:16:03] <proteneer> question about the mongo license, if I build mongo from source, then am I allowed to use SSL?
[21:16:11] <proteneer> (without paying for an enterprise license)
[21:21:00] <slon> hey! I have a new document and I want to know which fields will be duplicates when inserted. Is there a way to get all duplicate field errors when inserting?
[21:23:10] <ranman> slon: are you a markov chain or a human?
[21:26:06] <bob_123> hello, I have a question that seems silly but I can't get this to work for me: in a find query, how do you get mongo to match a document where a field value is 0?
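For bob_123's question: `{ field: 0 }` does match numeric zeroes; the usual gotcha is that the stored value is the *string* "0", which a numeric query won't match. A sketch using in-memory filters to mirror the two queries; field names are made up:

```javascript
const docs = [{ _id: 1, count: 0 }, { _id: 2, count: '0' }, { _id: 3, count: 5 }];

// db.c.find({count: 0}) — numeric zero only:
const numericZero = docs.filter((d) => d.count === 0);

// db.c.find({count: {$in: [0, "0"]}}) — both representations:
const eitherZero = docs.filter((d) => d.count === 0 || d.count === '0');

console.log(numericZero.length, eitherZero.length); // 1 2
```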
[21:29:22] <ranman> proteneer: I think there are legal reasons you can't include certain things around TLS but I could be wrong
[21:41:36] <proteneer> does pymongo need to be rebuilt with ssl support?
[21:41:49] <proteneer> if i rebuild my servers with ssl support?
[21:42:01] <LoneSoldier728> hey is this query wrong: User.findByIdAndUpdate(user, {$pullAll: {stories: storiesRemove}, $inc: {storiesCount: -storiesRemove.length}})? because I have an off-by-1 error on the total
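One likely cause of that off-by-one: decrementing by `storiesRemove.length` assumes every listed story was actually present in the array (exactly once); if one is missing or duplicated, the counter drifts from the real array length. A sketch that counts actual removals first, using the names from the question; note that reading then updating like this is not atomic:

```javascript
// Remove stories and keep storiesCount consistent with the array.
function removeStories(userDoc, storiesRemove) {
  const toRemove = new Set(storiesRemove);
  const kept = userDoc.stories.filter((s) => !toRemove.has(s));
  const removed = userDoc.stories.length - kept.length; // actual removals
  // Driver equivalent with the corrected count:
  //   User.findByIdAndUpdate(user, { $pullAll: { stories: storiesRemove },
  //                                  $inc: { storiesCount: -removed } })
  return { stories: kept, storiesCount: userDoc.storiesCount - removed };
}

const user = { stories: ['a', 'b', 'c'], storiesCount: 3 };
// 'zzz' is not in the array, so only one story is really removed:
console.log(removeStories(user, ['b', 'zzz']).storiesCount); // 2
```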
[22:14:19] <thearchitectnlog> i need to iterate in the child document and insert the parent object id into it
[22:18:10] <proteneer> dear god why is your build system scons
[22:26:45] <proteneer> btw are most mongo drivers supported to work out of the box with 2.6.0?
[22:26:53] <proteneer> or do the drivers also need an update?
[22:28:31] <ranman> proteneer: scons because of historical reasons, if you're feeling adventurous definitely check out some of the more exotic stuff in the scons file (also we support windows and needed cross platform builds)
[22:28:47] <ranman> proteneer: drivers will need an update to use 2.6 features yes
[22:28:58] <proteneer> it says the wire protocol changed
[22:29:10] <proteneer> does that mean even if I don't need the 2.6 features I will still need an update?
[22:30:51] <ranman> proteneer: no it will still support the old protocol