PMXBOT Log file Viewer


#mongodb logs for Wednesday the 26th of March, 2014

[00:02:12] <ShortWave> Hi all
[00:03:48] <ShortWave> I'm using a 2d query (with legacy coordinates, don't ask), and while that part works fine...what I need to do is also return the distance from a given record to a provided point.
[00:17:43] <Ramjet> HOW do I build the client libraries and includes... The scons --full used to do it but it has been DEPRECATED and I do not see any new targets in the SConstruct file that replaces --full ?????
[00:22:08] <mylord> don’t parse where? #5: http://cr.yp.to/qmail/guarantee.html
[00:31:04] <Ramjet> HELP!!!!!! HOW do I build the client libraries and includes... The scons --full used to do it but it has been DEPRECATED and I do not see any new targets in the SConstruct file that replaces --full ?????
[01:19:33] <ShortWave> first: Calm down, dammit.
[08:56:51] <Guest92111> hi, I'm building a restful API on a mongo database, and am having troubles figuring out how to properly implement pagination over a collection, ordered by an updatedAt timestamp, anyone have some experience with this problem?
[08:59:38] <Guest92111> when i paginate by 50 results for example, when retrieving the second page, some documents in the first page might have been already updated, which affects the entire result set. I thought about solving this by specifying a "since" timestamp which is set to the updatedAt of the last result in my current page, but a problem arises when different documents might have the same updatedAt timestamp
[09:00:44] <jinmatt> so what happens if they have the same timestamp
[09:01:04] <jinmatt> they will be listed in the order you want anyway
[09:01:40] <Nodex> gerryvdm : there is no guaranteed way to solve your problem
[09:01:57] <gerryvdm> yeah but say i retrieve 50 documents with the same timestamp, and there are 50 more with that timestamp in the next page, if I ask for all documents "since" the last updated at I keep looping over the first 50
[09:02:18] <Nodex> what if the second 50 got changed while receiving the first 50
[09:02:31] <Nodex> the second 50 is now the first 50 and the 3rd 50 are now the second and so on
[09:02:33] <gerryvdm> I actually was thinking about using an ObjectId as the column to sort on, would that be a solution?
[09:02:53] <Nodex> objectId gives you a asc/desc on insertion time
[09:03:12] <gerryvdm> Nodex: in that case I no longer have a "previous" page but I can still retrieve the next set
[09:03:28] <Nodex> you need a previous page to determine the next page
[09:03:28] <gerryvdm> I dont need the ability to go back a page
[09:03:38] <gerryvdm> i just want to synchronize all changes
[09:03:45] <Nodex> you need to know where you "were" to work out where you're "going"
[09:04:00] <Nodex> this is the nature of cursors
[09:04:28] <gerryvdm> well i "was" at the updatedat timestamp of the last document in my previous page
[09:04:55] <gerryvdm> but my problem is that i dont know how to make it unique
[09:05:16] <gerryvdm> i think my reasoning would work if a timestamp was unique
[09:05:18] <Nodex> there is no answer to the problem
[09:05:48] <Nodex> a query is a snapshot of results at the time of the query
[09:06:23] <gerryvdm> if i use ObjectId not only for the _id but also for a sort column, wouldnt that work? I'd create a new ObjectId on every update
[09:06:52] <Nodex> you would still need to page EVERY previous page to work out where you were
[09:07:06] <gerryvdm> i was at the last ObjectId?
[09:07:08] <Nodex> you would be submitting the same query over and over again
[09:07:26] <Nodex> what if something changes from page 1 ? - it shifts something to page 2
[09:07:41] <gerryvdm> yeah but thats fine, as long as I dont miss any changes
[09:07:44] <Nodex> then something new is in page 1's place ... it's an endless loop
[09:07:55] <Nodex> but you WILL miss changes by that very notion
[09:09:05] <gerryvdm> if i am in the middle of pagination, and i'm at everything updated since midnight last night, if something of a previous page changes right now, i would still get all changes and also my new change in the last page?
[09:09:14] <gerryvdm> wheres the error in my reasoning?
[09:09:21] <Nodex> how? if you're skipping 50?
[09:09:31] <gerryvdm> i'm not using skip at all
[09:09:40] <Nodex> [08:59:24] <Guest92111> when i paginate by 50 results for example, when retriev.......
[09:09:42] <gerryvdm> just $gte for the updated at
[09:09:49] <gerryvdm> i'm using limit yes
[09:10:19] <Nodex> so you're querying the same query every time?
[09:10:34] <Nodex> you just want the latest 50 documents sorted by update time?
[09:11:06] <gerryvdm> no say in my first page (by 2 results) i get two documents "updatedAt: 11:00, updatedAt: 12:00"
[09:11:31] <jinmatt> is this some kind of logging data that needs to be paginated? as per your description I have a feeling that it's never possible to see the end of this list as your data entries keep changing so frequently
[09:11:39] <gerryvdm> for my next query i need to ask the next 2 documents where updatedAt is greater than 12:00
[09:11:51] <gerryvdm> that works fine as long as updatedAt would be unique
[09:12:01] <Nodex> it's never going to be unique
[09:12:22] <gerryvdm> an objectid is gonna be unique
[09:12:29] <Nodex> then use an ObjectId
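[Editor's note] The fix the channel converges on here is keyset (cursor) pagination with a unique tiebreaker: sort on the compound key (updatedAt, _id) and resume strictly after the last-seen pair, so equal timestamps can never cause re-reads or skips. The sketch below simulates that in plain JavaScript; field names (`updatedAt`, `id`) and the commented query shape are illustrative, not an exact reproduction of anyone's schema.

```javascript
// Keyset pagination sketch: resume after the (updatedAt, id) pair of the
// last document seen, so documents sharing a timestamp are disambiguated
// by id and the cursor never loops over them.
function nextPage(docs, limit, after) {
  // Compound sort by (updatedAt, id) — the order the cursor relies on.
  const sorted = [...docs].sort((a, b) =>
    a.updatedAt - b.updatedAt ||
    (a.id < b.id ? -1 : a.id > b.id ? 1 : 0));
  const remaining = after
    ? sorted.filter(d =>
        d.updatedAt > after.updatedAt ||
        (d.updatedAt === after.updatedAt && d.id > after.id))
    : sorted;
  return remaining.slice(0, limit);
}

// Rough MongoDB equivalent of the "strictly after (ts, id)" predicate:
// db.coll.find({ $or: [ { updatedAt: { $gt: last.updatedAt } },
//                       { updatedAt: last.updatedAt,
//                         _id: { $gt: last._id } } ] })
//        .sort({ updatedAt: 1, _id: 1 }).limit(50)

const docs = [
  { id: "a", updatedAt: 1 }, { id: "b", updatedAt: 2 },
  { id: "c", updatedAt: 2 }, { id: "d", updatedAt: 3 },
];
const page1 = nextPage(docs, 2, null);
const page2 = nextPage(docs, 2, page1[1]); // no loop despite the tie at ts=2
```

Using a fresh ObjectId on each update (as gerryvdm suggests) achieves the same thing with a single sort field, since ObjectIds are unique and roughly time-ordered.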
[10:54:28] <jinmatt> mms agent is having problems connecting to an ec2 instance after adding firewall security rules, I’m getting the following error from the Agent Log “Problem collecting blocking data: could not connect to ip-xx-xx-xx-xx:27017: timed out”
[10:54:40] <jinmatt> mongodb version is 2.4.5
[10:54:46] <jinmatt> any idea why?
[10:55:02] <jinmatt> if I change the rule to access from any IP, the agent works
[10:55:43] <jinmatt> otherwise even though I add the agent host machine’s IP address to the access rule list on EC2, it's not working
[11:38:40] <jinmatt> rules, I’m getting the follow error from the Agent Log “Problem collecting blocking data: could not connect to ip-xx-xx-xx-xx:27017: timed out”
[13:03:38] <Industrial> Hi.
[13:05:30] <Industrial> I'm trying to figure out what the best way to save my data is in mongodb. The data is time based so I'm using millisecond timestamps for the _id index of the collections, and per timestamp that happened there may be several keys set. What I'd like to do now is to split that into one collection per key kind instead.
[13:06:27] <Industrial> Because not every document may contain every key, when I make queries for certain key kinds, I'm also querying all data that does not contain those keys. I could eliminate that constraint by just querying a database for the specific data I'm looking for.
[13:07:24] <Industrial> With the Node.js driver, does calling `var collection = db.collection('test_insert');` do anything on the database end? no right? so I could potentially call this thousands of times per second?
[13:07:44] <Industrial> (I'll probably cache it anyway, later, though)
[13:08:20] <Industrial> because I'm receiving a LOT of messages, not knowing which collection they should land in.
[13:08:27] <Industrial> beforehand
[13:12:21] <dexteruk> Hi Everyone
[13:13:21] <Nodex> omg like hiya
[13:14:28] <Nodex> Industrial : your "var" might have a local or global scope, depending on where you set it
[13:15:31] <Industrial> Nodex: yeah, but there's nothing going back and forth between mongo and node trying to 'fetch' a collection, right? It just modifies calls made to that object to work on a certain collection rather than another?
[13:16:49] <dexteruk> im new to the world of mongodb, i have a quick question: im looking at the best way to structure my documents. i have a set of dynamic elements, but i have read about Embedded Documents that the document can grow, which can impact write performance. is this really an issue, and what things should i consider?
[13:17:21] <Industrial> dexteruk: try and get something rolling on the system first, and then worry about performance
[13:17:32] <Industrial> its the best approach :) imho, even for testing purposes
[13:20:32] <dexteruk> so its nothing i should really worry about
[13:21:29] <Nodex> dexteruk : you should think about your structure first. This is the exact opposite of other systems
[13:21:55] <Nodex> you should tailor your app to your data and the fastest way to read/write it else you will NEVER get good performance
[13:22:25] <Nodex> Industrial : the driver will communicate when there is something to send to the database
[13:23:16] <dexteruk> yes i have seen this, in some examples, where you put the content and embed the link
[13:24:19] <Nodex> eh?
[13:29:24] <Industrial> Nodex: k
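[Editor's note] Nodex's answer above is the key point: with the Node.js driver, `db.collection(name)` (called without a strict-mode callback) just constructs a local wrapper object; network traffic only happens when an operation is issued on it. So calling it often is cheap, but Industrial's planned cache is still reasonable. A minimal sketch of such a cache, using a fake `db` stand-in so it runs standalone (`makeCollectionCache` is a hypothetical helper, not a driver API):

```javascript
// Per-name memo cache for collection handles; avoids rebuilding the
// wrapper object for every incoming message.
function makeCollectionCache(db) {
  const cache = new Map();
  return function getCollection(name) {
    if (!cache.has(name)) cache.set(name, db.collection(name));
    return cache.get(name);
  };
}

// Fake db for illustration — the real driver returns a Collection wrapper.
let created = 0;
const fakeDb = { collection: (name) => ({ name, n: ++created }) };
const getCollection = makeCollectionCache(fakeDb);
getCollection("events");
getCollection("events"); // second call is served from the cache
```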
[13:45:00] <paper_ziggurat> how would I make a key-value pair Collection?
[13:54:43] <balboah> paper_ziggurat: db.collection.insert({key: "key", value: "value"})
[13:55:02] <paper_ziggurat> i got it, balboah. thank you :)
[14:38:38] <ShortWave> Is there a way to get a distance calculation from a 2d query, without doing a calculation myself?
[14:39:47] <Nodex> one of them returns the distance, can't recall which one that is
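[Editor's note] The one Nodex is thinking of is `geoNear`: in 2.4-era MongoDB, the `geoNear` command and the `$geoNear` aggregation stage compute a per-document distance (a plain `find` with `$near` does not), and both work with a 2d index on legacy coordinate pairs. The stage shape below is a sketch with illustrative field names; the helper shows the planar distance a flat (non-spherical) 2d index implies.

```javascript
// $geoNear aggregation stage for a 2d index on legacy [lng, lat] pairs;
// distanceField names the field that receives the computed distance.
const pipeline = [{
  $geoNear: {
    near: [-73.99, 40.73],      // legacy coordinate pair (illustrative)
    distanceField: "dist",
    limit: 50,
  },
}];

// For flat 2d (non-spherical) data the distance is simply Euclidean,
// in the same units as the stored coordinates:
function planarDistance(a, b) {
  return Math.hypot(a[0] - b[0], a[1] - b[1]);
}
```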
[14:47:55] <yug> Does anyone know if it is possible to do aggregation on an aggregation with the aggregation framework? i.e. first to aggregate by hours and then aggregate the result of this by day
[14:50:49] <rybnik> yug so, grouped by day and hours ?
[14:53:28] <rybnik> yug would something like this do the work {$group: {_id: { h: {$hour: '$tick'}, d: {$dayOfMonth: '$tick'} }, n: {$sum:1}}} ?
[14:55:38] <yug> rybnik no because this will group by both together. I need to group first by hour and the result to group by day
[14:56:00] <yug> do you understand what I mean?
[14:56:23] <rybnik> yug, sorry, could you illustrate with an example (of the expected result) - use a pastie please ?
[14:56:43] <yug> it's like bucketing all the data at hour resolution and then bucketing by days
[14:56:48] <yug> I will illustrate
[14:56:51] <yug> just a sec
[14:57:24] <rybnik> yug sure :)
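[Editor's note] What yug is after is possible by chaining two `$group` stages in one pipeline: the first buckets by (day, hour), the second folds those hourly buckets into per-day results. The pipeline below is a sketch reusing the `'$tick'` field from rybnik's example; the plain-JS `rollup` shows the same two-stage fold for intuition.

```javascript
// Two chained $group stages: hourly buckets first, then fold them by day.
const pipeline = [
  { $group: { _id: { d: { $dayOfMonth: "$tick" }, h: { $hour: "$tick" } },
              n: { $sum: 1 } } },
  { $group: { _id: "$_id.d",
              hours: { $push: { hour: "$_id.h", n: "$n" } },
              total: { $sum: "$n" } } },
];

// The same folding in plain JS: count per (day, hour), then sum per day.
function rollup(events) {
  const hourly = {};
  for (const e of events) {
    const key = `${e.day}:${e.hour}`;
    hourly[key] = (hourly[key] || 0) + 1;
  }
  const daily = {};
  for (const [key, n] of Object.entries(hourly)) {
    const day = key.split(":")[0];
    daily[day] = (daily[day] || 0) + n;
  }
  return { hourly, daily };
}
```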
[15:01:52] <abique_> Hi, I'm using the C++ driver, and I wonder how to do a query "find by _id" is it conn.query("mydb", QUERY("_id" << "45645645645689765"));
[15:02:42] <abique_> Also, auto_ptr being deprecated, is it possible to use unique_ptr?
[15:02:44] <abique_> thanks
[16:04:46] <palominoz> hello there. given my multilanguage documents look like { id: 'some-mongo-id', locales: { en: {type: '}, it: { name: "pane"}}}
[16:08:06] <palominoz> hello there. given my multilanguage documents look like { id: 'some-mongo-id', locales: { en: {type: 'bread'}, it: { name: "pane"}}}, i would like to support fulltext search based on locale, like db.myCollection.runCommand("text", {search: "pane", language: "italian" }). I did run ensureIndex for each language indexable in the locales subdocument. mongo then stops me with error: too many text index for: production.manufacturers. how would you solve this problem? misspressed enter button before :)
[16:09:01] <palominoz> is it possible to have a single index and then discriminate language on the results?
[16:34:47] <LinkRage> When I use $group in Mongo aggregation the output is array named result. How do I print *only* the inner fields of that array in Mongo CLI?
[16:44:18] <rybnik> LinkRage aggregate().result ?
[16:57:28] <LinkRage> rybnik, wow 10x :)
[17:02:33] <javahorn> Hello
[17:04:10] <NaN> hi javahorn
[17:04:22] <javahorn> Hi NaN
[17:04:34] <javahorn> you use MongoDB?
[17:04:57] <NaN> yep, learning
[17:23:05] <paper_ziggurat> is it possible to iterate over a collection and then return a value where a certain json element is equal to a predetermine value?
[17:23:19] <paper_ziggurat> like return value where value.element = something?
[17:24:02] <NaN> paper_ziggurat: find({'value.element': something}); ?
[17:24:25] <paper_ziggurat> well the value is the document already in the collection
[17:24:48] <demo> hey, anyone here works for mongo's nyc offices by any chance?
[17:27:24] <saalaa__> hello, anyone has any experience accessing 'local.system.namespaces' on MongoLab?
[17:28:05] <NaN> paper_ziggurat: so are you looking for "something" or "value"?
[17:28:48] <saalaa__> (specifically I'm trying to install an Elastic Search MongoDB River which needs access to it but doesn't seem to be able to access it:)
[17:30:39] <paper_ziggurat> NaN, I have a key:value pair, I want to iterate over the collection and return the value where one of the value's json elements is equal to something
[17:30:43] <paper_ziggurat> 'something'
[17:32:00] <rkgarcia> paper_ziggurat: a query that finds documents in a collection?
[17:48:20] <saalaa__> for those who might skim through the backlog, it turns out MongoDB's *.system.namespaces, which is needed for an Elastic Search River, is not available on MongoLab plans below the "dedicated" offerings starting at 200 USD
[17:48:32] <saalaa__> the only alternative seems to be self-hosting
[17:48:46] <cheeser> or paying :)
[18:07:35] <paper_ziggurat> hey sorry all i had to migrate
[18:07:37] <paper_ziggurat> rkgarcia, yes
[18:08:00] <rkgarcia> paper_ziggurat: whats your problem? :D
[18:08:15] <paper_ziggurat> i'm not sure how
[18:08:19] <paper_ziggurat> do i use the where selector?
[18:08:36] <rkgarcia> paper_ziggurat: you need to use a document with the params
[18:08:37] <paper_ziggurat> the query is supposed to find a document 'where' a json element in the document is a certain value
[18:09:09] <rkgarcia> ok then db.collection.find({"json_element":"certain_value"})
[18:10:13] <paper_ziggurat> and it will return the whole document?
[18:15:04] <rkgarcia> yep
[18:15:08] <rkgarcia> in the mongodb shell
[18:39:30] <paper_ziggurat> rkgarcia, do i need the quotes around my query?
[18:49:48] <paper_ziggurat> it didn't work -_-
[18:58:36] <rkgarcia> pabst^: do you know json format?
[18:58:45] <rkgarcia> paper_ziggurat: do you know json format?
[18:58:48] <rkgarcia> pabst^: sorry
[18:58:56] <paper_ziggurat> yes
[18:59:15] <rkgarcia> mongo uses a json document for query
[18:59:23] <paper_ziggurat> it returned something undefined :(
[18:59:30] <rkgarcia> db.collection.find(where,select)
[18:59:42] <rkgarcia> where and select are json documents
[18:59:54] <rkgarcia> select (in mongodb project) it's optional
[19:00:17] <paper_ziggurat> hrmm
[19:00:28] <rkgarcia> paper_ziggurat: paste a document structure in a pastebin
[19:00:29] <paper_ziggurat> but it should return any document that contains the value i'm selecting
[19:00:41] <rkgarcia> no no paper_ziggurat
[19:00:51] <rkgarcia> the value needs to be exactly
[19:00:58] <rkgarcia> or you can use regex
[19:01:28] <paper_ziggurat> i thought find where select meant that it selects a document where a value in the document is equal to some other value
[19:02:07] <paper_ziggurat> i can't really paste the document 'structure' because i'm using xml2js
[19:02:37] <NaN> paper_ziggurat: http://docs.mongodb.org/manual/reference/method/db.collection.find
[19:03:19] <paper_ziggurat> looking at it right now
[19:03:28] <paper_ziggurat> but actually, i see this selector '$elemMatch'
[19:05:14] <NaN> paper_ziggurat: $elemMatch works on arrays
[19:05:25] <paper_ziggurat> yeah, i just found that out
[19:08:11] <paper_ziggurat> after running the query though, the value that's returned is a big JSON document with details about the collection
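[Editor's note] The answer paper_ziggurat was circling: `find({ "value.element": "something" })` matches nested fields via dot notation, and `$elemMatch` is only needed when several conditions must hold on the same array element. The sketch below imitates dot-notation matching in plain JS to show what the server does conceptually (the real server also descends into arrays, which this toy version omits).

```javascript
// Dot-notation matching, sketched: walk the path segment by segment and
// compare the leaf value. Returns false if any segment is missing.
function matchesDotPath(doc, path, expected) {
  let cur = doc;
  for (const key of path.split(".")) {
    if (cur == null) return false;
    cur = cur[key];
  }
  return cur === expected;
}

const doc = { value: { element: "something" } };
// Equivalent query: db.coll.find({ "value.element": "something" })
matchesDotPath(doc, "value.element", "something"); // matches
```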
[19:33:29] <Guest43094> I’m using 2d indexes in a multiplayer game world to find players near my current position on the map that are online and of a certain type. I’d like to use a compound index for this, but this is obviously only possible with 2dsphere. What I’m not clear about is if the 2dsphere index will return different results than the 2d index for the same location/data.
[20:15:23] <NaN> where can I configure the GMT that new Date (mongo shell) gives
[20:15:40] <cheeser> what?
[20:15:45] <NaN> xD
[20:16:27] <NaN> when I do >new Date() it doesn't show my current GMT (I know it's ISO) but it's not my current time, why?
[20:20:46] <cheeser> it shows the time in UTC, iirc
[20:24:09] <NaN> is there an option to configure the output
[20:24:14] <NaN> ?
[20:33:58] <slon> I have an unique index on multiple fields within a collection. When I insert a document containing duplicate data for multiple unique fields I only get back one error. Is there a way to get all errors back?
[20:35:28] <slon> I'm only getting back the first duplicate error, but I want to know all fields that are duplicates for the specific entry.
[20:36:18] <slon> like if name and email have separate unique indices and I'm adding a name and an email that both already exist, I want to get back errors about both fields. Currently, I'm only getting back errors for one field.
[20:44:50] <slon> Basically, I have a new document and I need to know which fields are duplicates.
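[Editor's note] The server stops at the first violated unique index (a single duplicate-key error), so to report all conflicting fields you have to check each unique field yourself before inserting, e.g. one `count`/`find` per field. That pre-check is inherently racy (another write can land between check and insert), so keep the unique indexes as the real guard. A sketch, with `existing` standing in for per-field lookups like `db.users.count({ name: doc.name })`:

```javascript
// Report every unique field that would collide, not just the first.
// `duplicateFields` is a hypothetical helper; racy without a lock.
function duplicateFields(doc, uniqueFields, existing) {
  return uniqueFields.filter(f =>
    existing.some(e => e[f] === doc[f]));
}

const existing = [{ name: "ann", email: "a@x.io" }];
duplicateFields({ name: "ann", email: "a@x.io" },
                ["name", "email"], existing); // both fields collide
```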
[21:16:03] <proteneer> question about the mongo license, if I build mongo from source, then am I allowed to use SSL?
[21:16:11] <proteneer> (without paying for an enterprise license)
[21:19:00] <slon> Slon
[21:20:00] <slon> slon
[21:20:05] <slon> sLon
[21:20:09] <slon> slOn
[21:20:13] <slon> sloN
[21:20:19] <slon> slOn
[21:20:20] <ranman> slon?
[21:21:00] <slon> hey! I have a new document and I want to know which fields will be duplicates when inserted. Is there a way to get all duplicate field errors when inserting?
[21:21:05] <slon> Right now I'm just getting one
[21:21:10] <ranman> proteneer: yes
[21:21:25] <slon> :(
[21:21:29] <proteneer> so what is the big deal with charging $7.5k for enterprise then?
[21:21:34] <joannac> proteneer: sure.
[21:21:48] <ranman> proteneer: there are a lot of other things in enterprise
[21:21:48] <proteneer> if i can just build in the features from source?
[21:21:49] <joannac> proteneer: so you don't have to build it yourself everytime we re-release?
[21:22:20] <ranman> proteneer: it also comes with a support contract
[21:22:23] <proteneer> i probably won't be rebuilding it anyways with every new release
[21:22:24] <ranman> proteneer: https://www.mongodb.com/products/mongodb-subscriptions
[21:22:26] <joannac> proteneer: the enterprise subscription has a lot of features
[21:22:30] <joannac> not just product features
[21:22:45] <slon> money, money, MONEY!
[21:22:49] <proteneer> it's stupid to not include SSL when it's as simple as adding --ssl as the build option
[21:22:59] <slon> Money
[21:23:01] <slon> mOney
[21:23:04] <slon> moNey
[21:23:07] <slon> monEy
[21:23:10] <slon> moneY
[21:23:10] <ranman> slon: are you a markov chain or a human?
[21:26:06] <bob_123> hello, I have a question that seems silly but I can't get this to work for me: in a find query, how do you get mongo to match a document where a field value is 0?
[21:26:17] <bob_123> Isn't it just: collection.find( { field: 0 } ); ?
[21:26:28] <Derick> yes it is
[21:26:37] <bob_123> don't know why I can't get it to work
[21:26:40] <Derick> be careful though as "0" is not 0
[21:26:48] <bob_123> let me try "0"...
[21:27:35] <ranman> slon: have you considered findAndModify()? you could probably design the functionality you want around that
[21:28:53] <bob_123> if I was doing update, it would be the same, right: collection.update( { field: 0 }, { $set: { field: 1 } } ); ?
[21:29:05] <Derick> yes
[21:29:22] <ranman> proteneer: I think there are legal reasons you can't include certain things around TLS but I could be wrong
[21:41:36] <proteneer> does pymongo need to be rebuilt with ssl support?
[21:41:49] <proteneer> if i rebuild my servers with ssl support?
[21:42:01] <LoneSoldier728> hey is this query wrong User.findByIdAndUpdate(user, {$pullAll: {stories: storiesRemove}, $inc: {storiesCount: -storiesRemove.length}}, because I have an off by 1 error... on the total
[21:42:07] <LoneSoldier728> for the storiesCount
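[Editor's note] A plausible cause of LoneSoldier728's off-by-one: `$pullAll` removes only the entries actually present in the array, so decrementing by `storiesRemove.length` over-counts whenever one of the ids isn't in `stories` (or is listed twice). One fix is to count the real removals first; a sketch, with `removalCount` as a hypothetical helper:

```javascript
// Count how many entries $pullAll would actually remove, so the $inc
// matches reality instead of the length of the removal list.
function removalCount(stories, storiesRemove) {
  const remove = new Set(storiesRemove.map(String));
  return stories.filter(id => remove.has(String(id))).length;
}

const stories = ["s1", "s2", "s3"];
removalCount(stories, ["s2", "s4"]); // "s4" was never in the array
// Then: { $pullAll: { stories: storiesRemove },
//         $inc: { storiesCount: -removalCount(stories, storiesRemove) } }
```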
[22:10:24] <thearchitectnlog> hey anyone can help me i am migrating from mysql to mongodb
[22:13:43] <ranman> what problem are you hitting?
[22:14:19] <thearchitectnlog> i need to iterate in the child document and insert the parent object id into it
[22:18:10] <proteneer> dear god why is your build system scons
[22:26:45] <proteneer> btw are most mongo drivers supported to work out of the box with 2.6.0?
[22:26:53] <proteneer> or do the drivers also need an update?
[22:28:31] <ranman> proteneer: scons because of historical reasons; if you're feeling adventurous definitely check out some of the more exotic stuff in the scons file (also we support windows and needed cross-platform builds)
[22:28:47] <ranman> proteneer: drivers will need an update to use 2.6 features yes
[22:28:58] <proteneer> it says the wire protocol changed
[22:29:10] <proteneer> does that mean even if I don't need the 2.6 features I will still need an update?
[22:30:51] <ranman> proteneer: no it will still support the old protocol