PMXBOT Log file Viewer


#mongodb logs for Wednesday the 28th of May, 2014

[00:20:32] <dgarstang> hellooooo!?
[00:26:09] <joannac> dgarstang: yes?
[00:34:04] <dgarstang> joannac: hey, so... I'm trying to get mongodb 2.6 with ssl. Any reason why I can't use the enterprise version? I thought it was going to prompt for a license key but no...
[00:42:39] <ryanhirsch> I have a 3 member replica set, I can connect to localhost on each of them, fine, failover and replication seems fine, but I can't connect from a remote box. From mongoose, I get "Error: No valid replicaset instance servers found" from the mongo client I get "Failed to connect to 10.3.164.132:27017, reason: errno:115 Operation now in progress" then " All nodes for set rsdev are down. This has happened for 1 checks in a row...."
[00:53:10] <joannac> ryanhirsch: well, can you connect to 10.3.164.132 ?
[00:55:22] <joannac> that's a weird error message though
[00:56:43] <ryanhirsch> on 132 I can connect just fine
[00:56:47] <ryanhirsch> its as a remote
[00:56:55] <ryanhirsch> its on a remote I seem to have issues
[00:58:06] <joannac> can you ping it?
[00:58:14] <joannac> can you telnet to it on 27017?
[00:58:42] <ryanhirsch> I can ping it
[00:59:39] <ryanhirsch> well maybe I can't...
[00:59:49] <ryanhirsch> if this is a stupid VLAN issue....
[01:08:27] <ryanhirsch> joannac, it does appear to be a network issue, thank you. Should have just gone back to basics
[01:13:49] <joannac> ryanhirsch: np, glad to help
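The back-to-basics checks joannac walks ryanhirsch through can be run from the remote box like this (IP and port taken from the log; `nc` shown as an assumed alternative):

```shell
ping -c 3 10.3.164.132        # is the host reachable at all?
telnet 10.3.164.132 27017     # is mongod's port reachable?
# nc -vz 10.3.164.132 27017   # alternative when telnet is unavailable
```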
[05:42:08] <obamacare> is mongodb like a big folder of json data?
[05:44:01] <obamacare> if so how would I do joins?
[05:45:42] <ranman> I think obamacare might be gerard oneil
[07:07:28] <mylord> how can i push an array element to position 0. $position:0 isn’t haven’t any effect
[07:07:40] <mylord> isn’t *having
[07:08:58] <mylord> db.turnys.update({"_id":ObjectId("5384fe0d62ff8c4aee6d3eb1")}, {$addToSet:{"users.u1":{$each:[ { uid:"u2", score:4, "joined":new Date() } ], $position:0}}})
[07:17:33] <mylord> who here among us is able to solve this? to add a “u1” doc to users at position 0 to this doc: https://gist.github.com/anonymous/f2ddd9b0596fab84d69d
[07:32:13] <mylord> solved: http://stackoverflow.com/questions/10131957/can-you-have-mongo-push-prepend-instead-of-append
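For the record, the fix from that Stack Overflow answer: `$position` (MongoDB 2.6+) is honored by `$push` together with `$each`, not by `$addToSet`, which is why the attempt above had no effect. A sketch using the names from the log, to run in the mongo shell:

```javascript
// Prepend to an array: $push with $each and $position.
db.turnys.update(
  { "_id": ObjectId("5384fe0d62ff8c4aee6d3eb1") },
  { $push: { "users.u1": {
      $each: [ { uid: "u2", score: 4, joined: new Date() } ],
      $position: 0              // prepend instead of append
  } } }
)
```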
[07:49:07] <cuzzo> If I have a mongodb document, it maxes at 16MB, right? About how many ObjectIDs can one document store? The size is 24 bytes, right? But I'm assuming there is overhead.
[08:31:40] <arussel> I have a collection with document with a timestamp. What would be the easiest way to group them by day ? ideally using an aggregate pipeline and even better being able to set the timezone myself.
[08:35:59] <rspijker> arussel: http://docs.mongodb.org/manual/reference/operator/aggregation/dayOfMonth/
[08:38:16] <arussel> rspijker thanks
[08:59:13] <milord> how can i do this? db.turnys.find({"_id":ObjectId("5384fe0d62ff8c4aee6d3eb1"),"users":{"u1":{"$exists":true}}});
[08:59:22] <milord> this works: db.turnys.find({"_id":"5384fe0d62ff8c4aee6d3eb1","users.u1":{"$exists":true}});
[09:06:23] <rspijker> milord: so? _id is a string instead of an ObjectId perhaps?
[09:06:35] <rspijker> _id is allowed to be anything, as long as it's unique
[09:07:48] <milord> rspijker: typo, I have ObjectId around both versions
[09:08:20] <milord> but mongo likes “users.u1” and not “users”:{“u1”… but users.u1 is giving me pains to recreate in node
[09:09:23] <rspijker> milord: well, users.u1 is very different from users:{“u1”…}
[09:09:44] <rspijker> users.u1 : “x” means, select things where the u1 field of users is “x”
[09:10:12] <rspijker> users:{“u1”:”x”} means, select things where users is EQUAL to the document {“u1”:”x”}
[09:10:36] <rspijker> that is, a document like {users:{“u1”:”x”, “u2”:”y”}} will match the first, but not the second
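The distinction rspijker describes, sketched as plain-JavaScript predicates (illustration only, not driver code):

```javascript
// {"users.u1": "x"} — matches when the u1 field inside users equals "x",
// regardless of any other fields in users.
function matchesDotted(doc) {
  return doc.users != null && doc.users.u1 === "x";
}

// {users: {u1: "x"}} — matches only when users as a whole equals the
// document {u1: "x"} (no extra fields; for multi-field documents the
// field order matters in BSON comparison too).
function matchesWhole(doc) {
  var keys = Object.keys(doc.users || {});
  return keys.length === 1 && keys[0] === "u1" && doc.users.u1 === "x";
}

var doc = { users: { u1: "x", u2: "y" } };
console.log(matchesDotted(doc)); // true  — dotted form matches
console.log(matchesWhole(doc));  // false — whole-document form does not
```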
[09:55:06] <arussel> I have this document: {a: "a", bs: [{d: 11}, {d: 12}]} the value of d is a timestamp (Long), is there a way to change it to a date object ?
[10:02:33] <rspijker> arussel: is it a timestamp in the mongo sense?
[10:02:41] <rspijker> or just a unix timestamp?
[10:02:49] <arussel> rspijker just a unix timestamp
[10:03:28] <arussel> looks like: 1400512120100.66
[10:04:09] <rspijker> that’s not a standard unix timestamp...
[10:05:06] <rspijker> is that, milliseconds since the epoch or something?
[10:05:49] <arussel> rspijker yes, adding nano after the dot ...
[10:06:38] <rspijker> I think you can just go: new ISODate(d) in that case...
[10:07:42] <rspijker> maybe not...
[10:08:03] <arussel> 2014-05-28T12:05:44.396+0200 invalid ISO date at src/mongo/shell/types.js:64
[10:08:26] <rspijker> new Date(d)
[10:08:50] <rspijker> > var d = 1400512120100.66
[10:08:50] <rspijker> > var date = new Date(d)
[10:08:51] <rspijker> > date
[10:08:52] <rspijker> ISODate("2014-05-19T15:08:40.100Z")
[10:10:50] <arussel> how do you wrap that in an aggregate ? db.foo.aggregate({$project: {"bs.d": new Date("$bs.d")}}) ?
[10:11:22] <arussel> maybe I should just divide the doc into multiple docs (if possible)
[10:20:14] <rspijker> arussel: should be possible in a projection, yes
[10:20:48] <rspijker> but it looks like your bs is actually an array of timestamps :/
[10:20:56] <rspijker> so you might have to unwind to get that to work properly
[10:21:59] <arussel> rspijker thanks for your help, I'll look into it
[10:22:07] <rspijker> good luck :)
[10:40:24] <arussel> from what I understand, to be able to use new Date() in my $project, I have to use $literal: {new Date(123)}, but then I can't use field path anymore so $literal: {new Date($b)} won't work ...
[11:21:01] <yeitijem> How to effectively filter time-stamped data by a time range???
[11:22:29] <arussel> found the way: db.foo.aggregate({$project: {date: {$add: [new Date(0), "$ts"]}}})
[11:22:42] <yeitijem> My idea was to store the milliseconds of the date ... and compare those
[11:23:13] <yeitijem> but it makes the documents less readable (and debuggable)
[11:23:20] <arussel> yeitijem aggregate using composite _id including the needed field
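arussel's pipeline above works because `$add` accepts a Date plus a number of milliseconds, and `new Date(0)` is the Unix epoch. The same arithmetic in plain JavaScript, using the sample timestamp from earlier in the log:

```javascript
// Adding a millisecond count to the epoch yields the corresponding Date
// (fractional milliseconds are truncated).
var ts = 1400512120100.66;                       // value from the log above
var date = new Date(new Date(0).getTime() + ts);
console.log(date.toISOString());                 // 2014-05-19T15:08:40.100Z
```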
[12:05:05] <greybrd> hi guys.. in mongodb db.collection.mapReduce() command returns a json document which has details like { serverused, result, timing, counts }. how to understand that document better?
[12:13:49] <joannac> ~>
[12:41:14] <mylord> darn.. this was working.. now all of a sudden doesn’t? https://gist.github.com/anonymous/c584734abebf96578cb0
[12:42:03] <mylord> ^ that “-1” is there to prepend to array instead of append
[12:42:32] <mylord> as explained here: http://stackoverflow.com/questions/10131957/can-you-have-mongo-push-prepend-instead-of-append
[12:47:08] <NodJS> Hi. Can anybody help me with MongoDB here?
[12:52:52] <NodJS> Is it possible to create document and make find query at once?
[12:53:22] <NodJS> I need to insert some document if some query is true
[12:56:00] <mylord> NodeJS: read docs on e.g. $upsert
[12:56:10] <mylord> (I think it’s possible)
[12:56:21] <mylord> is this data a nested array? https://gist.github.com/anonymous/c584734abebf96578cb0
[12:56:38] <mylord> “The positional $ operator cannot be used for queries which traverse more than one array, such as queries that traverse arrays nested within other arrays"
[12:58:58] <NodJS> http://stackoverflow.com/questions/23908593/mongodb-how-to-insert-new-document-based-on-query-via-single-command
[12:59:30] <joannac> mylord: use $each and $position
[13:00:17] <mylord> joannac: ya, that worked for me now.. but now how can I update “score” for u1.-1 ? ie, u1.$.score:3
[13:00:50] <joannac> what does u1.-1 refer to?
[13:01:12] <NodJS> I need to insert a new document if there is no date overlaps
[13:01:45] <mylord> joannac: here’s the prob now: https://gist.github.com/anonymous/3f86628883d20567e0b7
[13:02:23] <joannac> you can't use the positional operator without specifying which entry you want?
[13:03:09] <mylord> u1.-1 was working to specify the 1st element in array, both for $update - $push and $set. maybe I had older mongo version running, idk, but now that doesn’t work, and now, ya, $each $position:0 works for inserting a users.u1 record, but not for updating the score of that 1st u1
[13:03:42] <joannac> why not just update u1.0?
[13:03:51] <mylord> joannac: ok, thx! so, if I want to update the first users.u1 element’s score, how do I do that?
[13:05:02] <joannac> ... update u1.0
[13:06:09] <mylord> like this? db.turnys.update({"_id":ObjectId("5385d74f9cf5b1660b6c50b7")}, {"$set":{"users.u1.0":[{"score":3}]}})
[13:06:32] <joannac> no
[13:06:48] <joannac> users.u1.0 is the first element in the users.u1 array
[13:06:58] <joannac> set it to a document, not another array
[13:07:20] <mylord> right.. oops, typo from before.. thx again!
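Putting joannac's corrections together: `users.u1.0` is the first element of the users.u1 array, and a dotted path into that element changes one field without replacing the whole element. A sketch for the mongo shell (the _id is from the log; the exact field layout is an assumption):

```javascript
// Set only the score of the first element of the users.u1 array.
db.turnys.update(
  { "_id": ObjectId("5385d74f9cf5b1660b6c50b7") },
  { $set: { "users.u1.0.score": 3 } }
)
```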
[13:54:09] <saml> herror
[13:54:13] <saml> how can i do things?
[13:55:05] <saml> in collection, i have documents with field x. i want to cut the value of x, $push to array arr
[13:55:30] <saml> before: {x: "foobar"} after: {arr:[{y: "foobar"}]}
[13:55:42] <saml> can it be done with single .update? should i write a script?
[13:56:26] <saml> db.coll.update({x:{$exists:1}, arr:{$exists:0}}, {$push: {arr: ?????}} , {multi:1,upsert:0})
[13:56:31] <saml> what should be ?????????/
[14:04:15] <saml> https://gist.github.com/saml/c74bcc233a1724ea0365 how do i do this
[14:05:47] <saml> forEach right?
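forEach, right: in 2.6 a plain update cannot reference another field's value, so a script is the usual route. A sketch of the migration saml describes (before: `{x: "foobar"}`, after: `{arr: [{y: "foobar"}]}`), to run in the mongo shell:

```javascript
// Move the value of x into an array field arr, one document at a time.
db.coll.find({ x: { $exists: 1 }, arr: { $exists: 0 } }).forEach(function (doc) {
  db.coll.update(
    { _id: doc._id },
    { $push: { arr: { y: doc.x } } }   // optionally also $unset: { x: "" }
  );
});
```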
[14:36:45] <LyndsySimon> Question - I understand that MongoDB can do document-level atomic operations, but cannot guarantee atomicity in multi-object operations. Can it guarantee that the state of a single object has not changed between two or more sequential operations? i.e., is it possible to lock an object, read it, then update that object with a guarantee that the object hasn’t changed since the read took place?
[14:37:27] <Derick> it can, but only through findAndModify
[14:37:39] <Derick> but there is no locking
[14:38:09] <LyndsySimon> I think that’s a “no”, then. findAndModify is a single operation even though it performs two actions on the database side.
[14:38:10] <cheeser> yet ;)
[14:38:13] <LyndsySimon> Yet?
[14:38:16] <cheeser> it's a no
[14:38:16] <Derick> LyndsySimon: correct
[14:38:30] <Derick> LyndsySimon: there are workarounds of course
[14:38:45] <LyndsySimon> Are there plans to allow single- or multi-object locking in the future?
[14:39:42] <LyndsySimon> Derick: I’ve not found a suitable workaround yet, particularly for operations that impact multiple objects. Is there a workaround that will guarantee atomicity?
[14:40:08] <Derick> LyndsySimon: such as: http://docs.mongodb.org/manual/tutorial/create-an-auto-incrementing-field/#auto-increment-optimistic-loop
[14:40:19] <Derick> LyndsySimon: not for multiple documents I suppose
[14:40:45] <Derick> you need to use compare and updates techniques that the URL that I posted describes
[14:41:02] <Derick> LyndsySimon: if you can re-architect your schema so that "multiple objects" are just one document, that is probably better
[14:41:55] <LyndsySimon> That’s what I’m thinking. I don’t know if it will be possible in all cases, but combined with an append-only log that we can use to verify state on a regular basis, I think it might be enough.
[14:42:33] <Derick> LyndsySimon: yeah, that's basically how MongoDB's (internal) journal works too
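A sketch of the "optimistic loop" workaround Derick links to: make the update conditional on the value you read, and retry if a concurrent writer invalidated it. Function and field names here are illustrative, not from the log:

```javascript
// Compare-and-swap style increment: the update matches only if the
// document still holds the value we read, so a lost race is detected
// and retried rather than silently overwritten.
function incrementSeq(coll, id) {
  while (true) {
    var doc = coll.findOne({ _id: id });
    var res = coll.update(
      { _id: id, seq: doc.seq },        // matches only if unchanged
      { $set: { seq: doc.seq + 1 } }
    );
    if (res.nModified === 1) {          // WriteResult in the 2.6 shell
      return doc.seq + 1;
    }
    // another writer got there first — loop and try again
  }
}
```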
[14:42:37] <Derick> MANCHUCK: hello
[14:42:38] <LyndsySimon> I need a “source of truth”, and MongoDB doesn’t seem to be able to provide that, at least not in the way we’re using it.
[14:42:51] <Derick> "not in the way we’re using it."
[14:42:51] <MANCHUCK> hey Derick
[14:42:54] <Derick> that's important :-)
[14:43:12] <MANCHUCK> source of truth?
[14:43:28] <Derick> MANCHUCK: you missed half the conversation. Had a safe trip back?
[14:43:30] <LyndsySimon> If I do an append-only log that can be used to calculate proper state, then I have a source of truth. Interestingly, MongoDB seems to be an ideal place to store that sort of log stream.
[14:43:45] <MANCHUCK> it was hectic
[14:43:53] <MANCHUCK> my flight from BOS to NWK was delayed
[14:43:58] <MANCHUCK> so I wound up renting a car
[14:44:01] <MANCHUCK> and that was a nightmare
[14:44:06] <MANCHUCK> never rent from Avis
[14:44:10] <LyndsySimon> Oh my God dude - why would you fly to Newark?!
[14:46:04] <MANCHUCK> i needed to be on that side of the hudson after tek
[14:46:26] <MANCHUCK> trying to get through the bronx on a holiday weekend is way worse than newark
[14:48:54] <saml> LyndsySimon, mongodb is eventually consistent-ish thingy
[14:51:14] <LyndsySimon> MANCHUCK: I live in Virginia, moved here about a year ago from Arkansas. DC, New Jersey and Maryland scare me to death. It’s like a different country as soon as you cross the border.
[14:51:39] <MANCHUCK> trust me i know
[14:51:42] <MANCHUCK> im from longisland
[14:51:50] <MANCHUCK> NJ is its own world there
[14:52:09] <LyndsySimon> saml: Yeah, I know. I’m trying to make sure I understand all the implications of Mongo’s architecture, while simultaneously separating out the assumptions we’ve made within our application from the actual requirements of the data we’re storing.
[14:52:35] <saml> taht sounds real serisous business professional
[14:52:37] <LyndsySimon> I’m finding that some of our assumptions are unnecessary, while some things that aren’t explicitly defined in our models are actually really important.
[14:53:02] <LyndsySimon> saml: Says the user named after an XML-based security protocol :)
[14:53:30] <saml> what are you using mongodb for?
[14:53:38] <LyndsySimon> Everything, of course! :)
[14:53:45] <saml> i work for a media site. we put articles in mongodb
[14:54:07] <LyndsySimon> Sounds like a good use. I’d rather not say here honestly. I’d be happy to discuss in PM.
[14:54:17] <saml> and have oplog tailing updating denormalized docs
[14:54:31] <saml> ah i see. makes sense
[14:54:39] <LyndsySimon> It’s open source, but I’m cautious of sharing potential issues in an open channel before they’re resolved in production.
[14:54:59] <saml> that's good
[14:55:36] <LyndsySimon> “oplog tailing” - that’s a new term to me, but it sounds like what I’ve got in mind. Do you have a resource describing the concept?
[14:56:24] <arussel> is there something in mongo that can turn a timestamp in a date if I give it a timezone ? or is it just what is available in javascript ?
[14:57:34] <saml> LyndsySimon, http://stackoverflow.com/questions/9691316/how-to-listen-for-changes-to-a-mongodb-collection http://derickrethans.nl/mongodb-and-solr.html
[14:58:00] <saml> https://github.com/10gen-labs/mongo-connector
[14:58:19] <MANCHUCK> LyndsySimon, I am getting a talk together about how to effectively use MongoDB
[14:58:37] <MANCHUCK> sounds kinda like what you are talking about
[14:59:10] <MANCHUCK> arussel, mongodate does not store Time zones
[15:00:05] <arussel> MANCHUCK so if I want to group data by date as seen in california, I need to do it 'manually' ?
[15:00:48] <arussel> maybe add 7 hours in millis to the timestamp ... (or 8 sometimes)
[15:00:49] <MANCHUCK> yes you would have to figure out the date outside of mongo
[15:01:01] <MANCHUCK> Derick, talks about this with his DateTime Talks
[15:01:29] <arussel> this breaks my pipeline to do it outside mongo
[15:02:09] <arussel> I'll try to send the diff between california and UTC in milis and adjust the timestamp so I can group by day inside mongo
[15:02:47] <MANCHUCK> keep in mind you will have the same issue with time stamps in general
[15:02:57] <MANCHUCK> they do not contain the TZ info
[15:03:49] <arussel> I actually got the timezone in another field
[15:04:10] <LyndsySimon> arussel: careful with sending in a time delta from outside the DB - they aren’t constant. If your date range crosses a daylight savings changeover, you’ll have two deltas to deal with.
[15:04:39] <LyndsySimon> arussel: That’s why many databases don’t try to store datetimes with timezone information, and those that do often get it wrong. It’s an extremely complex issue.
[15:05:08] <Derick> as for timezones, I am about to start work on a proposal to get thta into MongoDB... any real life reasons why you'd need that, please email me at derick (at) mongodb /cc arussel
[15:05:42] <Derick> LyndsySimon: yes, so complex I wrote about about it (and implemented it in PHP!)
[15:05:59] <arussel> java starts to get it right
[15:06:15] <Derick> yeah, YodaTime and PHP's implementation are fairly similar
[15:06:16] <rspijker> we’re actually running into some major issues with it… Considering just letting go of timezones altogether
[15:06:53] <saml> arussel, store UTF only in db.
[15:06:57] <saml> UTC
[15:07:09] <saml> client can convert to local time from UTC
[15:07:22] <LyndsySimon> rspijker: That’s how I handle it. I’m a Python dev, and I only deal with UTC on the serverside, period. Use moment.js or something else on the client side to translate them to user-facing strings.
[15:07:27] <saml> and client converts to UTC before sending to db
[15:07:42] <arussel> I've got timestamp in the db, but I need 'get me yesterday stats' as seen in california
[15:07:53] <LyndsySimon> When you think about it, timezone translation is actually presentation. The UTC time is the actual data.
[15:08:31] <LyndsySimon> arussel: Cool, then you should be able to easily figure a timestamp range in the application and pass that in instead of a date :)
[15:09:07] <arussel> LyndsySimon sure, but then I need more complex stuff and need to group by day
[15:09:48] <arussel> so group using day, year, month but this is utc and not pdt or whatever
[15:10:27] <arussel> I've already spent 2 hours just to get a date object out of a timestamp field :-)
[15:10:27] <Derick> you can do a project and add stuff to the date/timestamp
[15:11:02] <saml> arussel, give me example doc. and give me json that you want
[15:11:07] <arussel> yeah, pre-processing the data, putting the date in, would be the best
[15:11:22] <saml> {timestamp:234242344} => {"date": "2014-01-02T00:00:00Z"} ?
[15:11:48] <arussel> yep, I know how to do it now, but I had a lot of trouble finding the solution
[15:12:01] <saml> give me solution
[15:12:03] <saml> so i can learn
[15:12:12] <saml> did you blog ?
[15:12:27] <arussel> date: {$add: [new Date(0), "$ts"]}}
[15:13:09] <saml> oh i see
[15:13:10] <arussel> no blog, it sucks up to much time
[15:13:24] <saml> with blog, you get to be youtube celebrity and rich
[15:14:01] <rspijker> that’s true of course… think of all the famous mongodb bloggers
[15:15:32] <saml> what's $ts ?
[15:15:42] <arussel> the timestamp
[15:15:49] <saml> that's your field's name?
[15:16:02] <arussel> I'm lucky to have enough customer without having to blog
[15:16:18] <saml> $add is .update() ?
[15:16:32] <saml> aggregation
[15:16:46] <arussel> yes, this is for aggregation
[15:17:27] <LyndsySimon> I’m not a Mongo expert, unfortunately - but grouping by day seems like a problem that a simple query won’t solve, because you’re grouping by a loosely-defined, derived attribute of the document. I *think* the right approach there would be a map/reduce, but I hesitate to make a recommendation as I don’t think I’m yet qualified to do so.
[15:18:27] <rspijker> LyndsySimon: aggregation framework can do it, but it expects a date object and not a timestamp
[15:19:35] <LyndsySimon> without a timezone correction applied, a date object doesn’t seem to be what we’re really wanting to group by here.
[15:21:39] <rspijker> and how would map-reduce alleviate that concern?
[15:22:17] <Derick> rspijker: you can convert from timestamp to Date with it, and vv (IIRC)
[15:23:31] <arussel> Derick my use case is really statistics such as: I'm computing how many customer per day in a real life shop in san diego. I can't do that in mongo if I have only the timestamp
[15:24:15] <arussel> because if the shop is closed on saturday/sunday, I want 0 customer and not some leftover because of timezone error
[15:25:03] <rspijker> Derick: can’t you do that in aggregation as well?
[15:25:18] <arussel> (my use case is not a real shop, but a situation that if it would be very visible if there is a 7 hours shift in the computing of statistics)
[15:28:05] <rspijker> arussel: Either way, if you need to take it into account, take it into account… But it all starts with data, you need to have the information available and, preferably, in a format that will make your life easier
[15:29:14] <arussel> rspijker very true, I need to do a bit more work in the processing of data before inserting into mongo
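One way to implement what arussel settles on: compute the UTC offset application-side (it is not constant across a DST changeover, as LyndsySimon points out), shift the timestamp, then group by day. Field and collection names are illustrative; `$ts` is the raw millisecond timestamp as above:

```javascript
var offsetMs = -7 * 3600 * 1000;   // e.g. PDT; recompute per query, never hardcode
db.foo.aggregate([
  { $project: { local: { $add: [ new Date(0), "$ts", offsetMs ] } } },
  { $group: {
      _id: { y: { $year: "$local" },
             m: { $month: "$local" },
             d: { $dayOfMonth: "$local" } },
      customers: { $sum: 1 }
  } }
])
```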
[15:37:33] <remlabm> looking for advice in creating a 'timeline' based schema.. similar to how a twitter would be setup.. but want to get a better idea on how collections should be broken up and if there are any mongodb helpers or best practices for this scenario
[15:42:06] <MANCHUCK> mongoid stores the time stamp
[15:42:24] <MANCHUCK> so really it is just inserting into a collection
[16:12:23] <djlee> Hi all, in MySQL you can specify a custom sort order for the results (so if i want the id's in the order of [1,5,10,2,100,3] i can specify that in the ordering clause. Does mongo support doing a similar sort of thing?
[16:12:47] <djlee> or do you have to order by a document attribute and thats it
[16:35:47] <kali> djlee: by a field value, and you want to have an index matching the filtering criteria and the sort order (in that order) if there are more than 1000 documents to order
[16:50:20] <garrilla> i'm creating an index on collection with 21^6 documents. The first 90% of the index took about 2 hours, the next 5% has taken about 3 hours and the current 1% is chugging away. There doesn't seem to be much disk activity and ram use is very low and stable. Does this sound par for the course?
[17:21:41] <dgarstang> I could really use some help building RPM's for mongo. Very few resources available.
[17:31:02] <dgarstang> Man, this place is dead.
[17:58:35] <hpekdemir> hi all
[17:58:59] <hpekdemir> I wonder. if I write into a collection and at the same time read a collection (a document in it), do I get any errors?
[17:59:11] <hpekdemir> so the question is: is concurrent read/write possible?
[17:59:25] <hpekdemir> on the same document
[17:59:45] <hpekdemir> or even same field
[17:59:57] <cheeser> no
[18:06:45] <hpekdemir> ok my goal is to have a centralized pool of information about several routers/switches. I use a perl script to fetch information via netconf protocol. I would like to fill a mongodb with that information
[18:06:58] <hpekdemir> fetching information will happen every minute.
[18:07:30] <hpekdemir> but since I don't have any control about icinga/nagios's scheduling queue, it could be that nagios wants to fetch the mongodb document information
[18:07:40] <hpekdemir> and at the same time the perl script could write into that document.
[18:07:48] <cheeser> one would block the other
[18:07:51] <hpekdemir> how could I solve this theoretical problem?
[18:09:42] <hpekdemir> cheeser: block. so with a given timeout it should still work?
[18:14:16] <saml> hpekdemir, you can't write and read at the same time
[18:15:33] <saml> no i'm wrong
[18:16:08] <saml> how much data do you write every minute? how much data do you read?
[18:16:24] <cheeser> well, you can't. those operations will be interleaved in one form or another.
[18:22:19] <hpekdemir> saml: I fetch a bulk of information (returned in xml), process it, write to mongodb. and this every minute. it is about up to 100 entries
[18:22:26] <hpekdemir> read and write.
[18:23:08] <saml> with single mongod, read performance could degrade while you write.
[18:23:23] <saml> so you set up a replicaset. write to master. read from slave
[18:24:02] <cheeser> provided you have certain tolerances for potentially stale data.
[18:24:10] <cheeser> but 100 writes/minute is nothing to worry about.
[18:26:10] <hpekdemir> cheeser: 100 writes for one device
[18:26:42] <cheeser> ok...
[18:26:49] <hpekdemir> the checks will be for all devices every minute. 40 devices in total
[18:26:56] <hpekdemir> 42
[18:27:07] <saml> 42 is good
[18:27:32] <saml> hpekdemir, your main concern is performance?
[18:27:58] <hpekdemir> saml: not that important. I want atomic, consistent data.
[18:28:02] <saml> mongodb can handle writes and reads just fine
[18:28:07] <saml> oh then i'd use something else
[18:28:41] <saml> there are many databases with ACID
[18:29:01] <cheeser> updates to documents are atomic
[18:29:31] <saml> hpekdemir, data is normalized? are they related at all?
[18:29:45] <saml> document A is referenced by documents A1, A2, A3, ... etc
[18:29:46] <hpekdemir> not related at all
[18:30:05] <saml> give mongodb a go.
[18:30:10] <saml> it's easy to get started
[18:31:07] <hpekdemir> ok
[18:31:18] <dgarstang> I am trying, in vain, to get rpm's for mongo 2.6 with ssl.
[18:31:29] <hpekdemir> I will start testing it. and later consider to use sharding
[18:31:42] <hpekdemir> or master/slave replicasets
[18:32:03] <dgarstang> The spec files in the github source don't work, and the non enterprise rpm's at http://downloads-distro.mongodb.org/repo/redhat/os/x86_64/RPMS/ don't have SSL
[18:32:19] <dgarstang> There's also no SRPM's at http://downloads-distro.mongodb.org/repo/redhat/os/x86_64/RPMS/
[18:32:52] <jareddlc> hey guys, i have a quick question setting up my conf file
[18:33:00] <jareddlc> an older version of mongo im using has the line
[18:33:06] <dgarstang> Been at this a week and I think the boss is starting to get irritated with me. :(
[18:33:14] <jareddlc> bind_ip = 127.0.0.1,<anotherIP>
[18:33:24] <jareddlc> but this causes an error with v2.6+
[18:33:32] <jareddlc> i couldnt find it documented
[18:33:47] <jareddlc> ERROR: listen(): bind() failed errno:98 Address already in use for socket: 0.0.0.0:27017
[18:35:01] <cheeser> that's not a problem with 2.6. you just have something on that port already.
[18:35:08] <cheeser> lsof can tell you what it is.
[18:35:25] <jareddlc> lsof?
[18:35:59] <jareddlc> ohhh
[18:36:03] <jareddlc> do i grep for mongo?
[18:36:36] <cheeser> you grep for that port primarily.
[18:36:43] <cheeser> probably an old mongo process running, though.
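cheeser's suggestion, concretely (port taken from the error message):

```shell
# What is already bound to 27017?
lsof -i :27017
# a likely culprit is a leftover mongod process:
ps aux | grep [m]ongod
```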
[18:37:39] <dgarstang> Well, I'm (in)conveniently located in Palo Alto. Gonna do to this today. http://www.meetup.com/MongoDB-SV-User-Group/events/177953032/ Maybe I'll get some answers there
[18:37:56] <jareddlc> ive tried looking for that port being open but no luck
[18:38:17] <jareddlc> anyways, thanks for the help. removing that line solved my problem. Im just not sure what's the proper way to add an IP
[18:39:49] <saml> i have 50% of documents with modifiedDate field. 50% don't. how would I paginate the collection?
[18:40:09] <saml> should i add very old modifiedDate to all documents who don't have the field?
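saml's own idea, written out: backfill a sentinel "very old" date on the documents missing the field, so a sort and paginate on modifiedDate covers the whole collection. A mongo-shell sketch, collection name illustrative:

```javascript
db.coll.update(
  { modifiedDate: { $exists: false } },
  { $set: { modifiedDate: new Date(0) } },   // sentinel: the epoch
  { multi: true }
)
```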
[18:41:27] <dgarstang> How about compiling 2.6 on CentOS? That fails with "scons: *** [build/linux2/64/ssl/mongo/mongoimport] Error 1"
[18:58:06] <dgarstang> Anyone know where I can get mongodb RPM's for version 2.6.x with SSL support?
[19:03:16] <radixo> hello guys.. I have just installed mongodb in my system... how can I setup user and password for login?
[19:05:21] <cheeser> dgarstang: that's an enterprise feature and not available in the RPMs provided via yum
[19:06:36] <arussel> radixo http://docs.mongodb.org/manual/tutorial/enable-authentication/
[19:06:54] <arussel> radixo note that you don't *have to* do it
[19:07:59] <radixo> arussel: why?
[19:08:44] <arussel> radixo you can use un-authenticated connection to mongo
[19:09:10] <arussel> this is usually the first step with postgres/mysql, but not with mongo, you can work without
[19:09:25] <dgarstang> cheeser: I noticed. For $25,000 per year I'll keep looking and asking. :)
[19:09:37] <cheeser> it's not that much
[19:09:53] <cheeser> i don't think...
[19:09:55] <radixo> arussel: I know.. but I want a secure layer
[19:10:11] <dgarstang> cheeser: $2,500 per shard instance... I don't know how many shards we will need but with 3 nodes per shard...
[19:10:21] <dgarstang> That's at least 6x$2500
[19:10:51] <arussel> radixo sure, I was just metionning it
[19:11:22] <radixo> arussel: thanks..
[19:11:29] <dgarstang> I suppose there isn't much incentive to provide instructions on how to build your own
[19:13:44] <cheeser> http://www.mongodb.org/about/tutorial/build-mongodb-on-linux/
[19:13:48] <cheeser> google++
[19:13:54] <dgarstang> Trying to use scons to build r2.6.1 and r2.6.0 from source results in build errors.
[19:14:11] <dgarstang> on both centos and ubuntu
[19:14:52] <dgarstang> cheeser: yep, did that. got errors like "collect2: error: ld returned 1 exit status ; cons: *** [build/linux2/64/ssl/mongo/mongoimport] Error 1"
[19:15:12] <cheeser> can't help you there. it's all black magic to me.
[19:21:51] <saml> I have a date field {date: ISODate("2014-05-21T20:18:33.617Z")} how can I find documents with same date?
[19:22:03] <cheeser> $eq
[19:22:06] <saml> most documents will have unique date
[19:22:08] <cheeser> ?
[19:22:22] <saml> something like group by. but discard all single document groups
[19:22:44] <saml> or.. i want to verify a collection has all documents with unique date
[19:23:02] <saml> just try to ensure unique index on date and see if it fails?
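Either approach from the exchange above works, sketched in mongo-shell syntax (collection and field names from the log):

```javascript
// Option 1: a unique index build fails if any two documents share a date.
db.coll.ensureIndex({ date: 1 }, { unique: true })

// Option 2: list the duplicate dates instead of failing.
db.coll.aggregate([
  { $group: { _id: "$date", count: { $sum: 1 } } },
  { $match: { count: { $gt: 1 } } }
])
```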
[19:29:14] <radixo> arussel: why I can not connect to mongo even on localhost after authorization: enable ?
[19:29:52] <cheeser> are you passing credentials when you try to connect?
[19:35:45] <radixo> I can't connect to mongo even on localhost after authorization: enable .. why?
[19:36:33] <cheeser> 15:28 < cheeser> are you passing credentials when you try to connect?
[19:40:45] <radixo> cheeser: no
[19:41:10] <radixo> sorry the delay..
[19:41:44] <cheeser> well then...
[19:43:06] <radixo> cheeser: and the Localhost Exception?
[19:43:10] <radixo> how does it work?
[19:43:50] <cheeser> i don't know offhand
[19:49:41] <radixo> someone here knows how Localhost exception works?
[19:50:23] <cheeser> http://docs.mongodb.org/manual/core/authentication/#localhost-exception
[19:51:39] <radixo> cheeser: but it is not working
[19:51:41] <radixo> ;(
[19:51:45] <radixo> :(
[20:06:32] <KrzyStar> Hi there :)
[20:09:49] <KrzyStar> What's the exact difference between mongojs and mongodb native driver for node?
[20:10:20] <KrzyStar> From what I've read, mongojs wraps the latter, but what's the point? :p
[20:28:07] <jesswa> I have a query with $or statements inside of an $and - as soon as I add that $and, my queries stop using the indexes… Does anybody have a hint as to why nesting these might skip the indexes, or have a good reference on how mongo indexes $or inside of an $and?
[20:43:07] <saml> jesswa, testcase
[20:44:05] <saml> https://jira.mongodb.org/browse/SERVER-3327
[20:44:57] <saml> Aaron Staple planned to implement that in 2011 but then he quit
[20:46:47] <jesswa> @saml, have a compound index on three fields and trying to query on various combinations of those three fields (in correct order)… trying to rewrite the query as individual permutations of each $or… will post a pastebin if this doesn't work out :)
[20:47:31] <saml> i don't know.. just give us test case? few db.coll.inserts db.coll.ensureIndex.. and queries you're running
[20:54:28] <jesswa> @saml untested, but I'm doing something like this http://pastebin.com/VFM3eUYF
[20:55:12] <jesswa> after I rewrote it: I had two $or statements inside of an $and, but changed them into 6 $or statements at the top level and explain() is showing that it is using the correct indexes now
[20:57:20] <saml> jesswa, you don't need $and
[20:57:31] <saml> {a:b,c:d} means a:b and c:d
[20:57:40] <saml> db.feeds.find({ 'name': 'foo', '$or': [{'size': {"$gt": 25}}, {'size': 0}, {'isDirty':true}]} ).explain();
[20:57:45] <jesswa> I know, the original query is a bit more complex
[20:57:45] <saml> so do that instead of $and
[20:57:50] <jesswa> but that is the example that breaks
[20:58:14] <jesswa> the first query (line 7) will use the index, the second query (line 12) will not
[20:58:37] <jesswa> and the only difference is that '$and' operator
[20:59:13] <jesswa> so I was wondering why that would suddenly stop using the index?
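A reconstruction of the rewrite jesswa describes, with fields borrowed from saml's example above (the real query is in the pastebin and expands to six branches): an $or nested under $and could fail to use indexes in 2.6 (see SERVER-3327), but distributing the $and over the $or lists yields a single top-level $or the planner handles:

```javascript
// Before: $or clauses nested under $and — may scan instead of using the index.
db.feeds.find({ $and: [
  { $or: [ { size: { $gt: 25 } }, { size: 0 } ] },
  { $or: [ { isDirty: true }, { name: "foo" } ] }
] })

// After: the same predicate distributed into one top-level $or.
db.feeds.find({ $or: [
  { size: { $gt: 25 }, isDirty: true },
  { size: { $gt: 25 }, name: "foo" },
  { size: 0, isDirty: true },
  { size: 0, name: "foo" }
] })
```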
[21:09:44] <jhuel> hi i have a problem building the C++ driver
[21:10:22] <jhuel> i get LINK : fatal error LNK1104: cannot open file 'boost_thread-vc110-mt-1_49.lib' when trying to call this scons $ARGS install-mongoclient --dynamic-windows --sharedclient
[21:11:15] <jhuel> but it builds right when i call scons $ARGS install-mongoclient or scons $ARGS install-mongoclient --dbg=on
[21:11:19] <jhuel> help!
[21:30:26] <ngl> Hi all. I'm trying to do a writeTo on the server side, but I don't seem to have the File class... so I can't just: file.writeTo(new File("/tmp/smithco.pdf"));. How do I get the File class referenced in mongo docs?
[21:31:49] <ngl> I am getting nothing from searching for answers, so I'm here. :D
[21:37:37] <dgarstang> So... how does one load balance two routers? :)
[22:13:08] <KrzyStar> Interesting... mongo shell segfaults immediately after starting up
[22:13:18] <KrzyStar> Must be something wrong with my installation I guess..
[23:24:00] <tafryn> Does the c++ driver support upsert? I can't find it in the api.