[00:34:04] <dgarstang> joannac: hey, so... I'm trying to get mongodb 2.6 with ssl. Any reason why I can't use the enterprise version? I thought it was going to prompt for a license key but no...
[00:42:39] <ryanhirsch> I have a 3 member replica set, I can connect to localhost on each of them, fine, failover and replication seems fine, but I can't connect from a remote box. From mongoose, I get "Error: No valid replicaset instance servers found" from the mongo client I get "Failed to connect to 10.3.164.132:27017, reason: errno:115 Operation now in progress" then " All nodes for set rsdev are down. This has happened for 1 checks in a row...."
[00:53:10] <joannac> ryanhirsch: well, can you connect to 10.3.164.132 ?
[00:55:22] <joannac> that's a weird error message though
[00:56:43] <ryanhirsch> on 132 I can connect just fine
[07:17:33] <mylord> who here among us is able to solve this? to add a “u1” doc to users at position 0 to this doc: https://gist.github.com/anonymous/f2ddd9b0596fab84d69d
[07:49:07] <cuzzo> If I have a mongodb document, it maxes at 16MB, right? About how many ObjectIDs can one document store? The size is 24 bytes, right? But I'm assuming there is overhead.
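No one answered cuzzo in the log, but the arithmetic can be sketched. Note the question's premise is slightly off: an ObjectId is 12 bytes (the familiar 24 characters are its hex encoding). In a BSON array each element also costs a type byte plus its stringified index as a key, so a rough upper bound for a document that is nothing but one big array of ObjectIds:

```javascript
// An ObjectId is 12 bytes; "24" is the length of its hex string form.
// Per BSON array element: 1 type byte + index-as-cstring + 12-byte value.
var maxBytes = 16 * 1024 * 1024;        // 16MB document limit
var perElement = 1 + 7 + 1 + 12;        // ~6-digit index keys + NUL dominate near the limit
var approxCount = Math.floor(maxBytes / perElement);  // ≈ 800k ObjectIds
```

So roughly 800k ObjectIds fit, give or take the exact key-length overhead.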
[08:31:40] <arussel> I have a collection with document with a timestamp. What would be the easiest way to group them by day ? ideally using an aggregate pipeline and even better being able to set the timezone myself.
[08:59:13] <milord> how can i do this? db.turnys.find({"_id":ObjectId("5384fe0d62ff8c4aee6d3eb1"),"users":{"u1":{"$exists":true}}});
[08:59:22] <milord> this works: db.turnys.find({"_id":"5384fe0d62ff8c4aee6d3eb1","users.u1":{"$exists":true}});
[09:06:23] <rspijker> milord: so? _id is a string instead of an ObjectId perhaps?
[09:06:35] <rspijker> _id is allowed to be anything, as long as it's unique
[09:07:48] <milord> rspijker: typo, I have ObjectId around both versions
[09:08:20] <milord> but mongo likes "users.u1" and not users:{"u1"… but users.u1 is giving me pains to recreate in node
[09:09:23] <rspijker> milord: well, users.u1 is very different from users:{“u1”…}
[09:09:44] <rspijker> users.u1 : "x" means, select things where the u1 field of users is "x"
[09:10:12] <rspijker> users:{"u1":"x"} means, select things where users is EQUAL to the document {"u1":"x"}
[09:10:36] <rspijker> that is, a document like {users:{"u1":"x", "u2":"y"}} will match the first, but not the second
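rspijker's distinction can be sketched in the shell (the `turnys` collection name is taken from milord's query above; the documents themselves are hypothetical):

```javascript
// Two hypothetical documents to illustrate the difference:
db.turnys.insert({ _id: 1, users: { u1: "x" } })
db.turnys.insert({ _id: 2, users: { u1: "x", u2: "y" } })

// Dot notation matches any document whose users.u1 field is "x":
db.turnys.find({ "users.u1": "x" })          // matches _id 1 AND _id 2

// Whole-document equality only matches when users is EXACTLY {u1: "x"}:
db.turnys.find({ users: { u1: "x" } })       // matches _id 1 only

// Same logic applies to milord's existence check:
db.turnys.find({ "users.u1": { $exists: true } })   // both documents
```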
[09:55:06] <arussel> I have this document: {a: "a", bs: [{d: 11}, {d: 12}]} the value of d is a timestamp (Long), is there a way to change it to a date object ?
[10:02:33] <rspijker> arussel: is it a timestamp in the mongo sense?
[10:40:24] <arussel> for what I understand, to be able to use new Date() in my $project, I have to use $literal: {new Date(123)}, but then I can't use field path anymore so $literal: {new Date($b)} won't work ...
[11:21:01] <yeitijem> How to effectively filter time-stamped data by a time range???
[11:22:29] <arussel> found the way: db.foo.aggregate({$project: {date: {$add: [new Date(0), "$ts"]}}})
[11:22:42] <yeitijem> My idea was to store the milliseconds of the date ... and compare those
[11:23:13] <yeitijem> but it makes the documents less readable (and debuggable)
[11:23:20] <arussel> yeitijem aggregate using composite _id including the needed field
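Combining arussel's two pieces — converting the Long millisecond timestamp to a Date with $add, then grouping on a composite _id — might look like this (the `foo` collection and `ts` field names come from the snippet above; this is a sketch, grouping in UTC):

```javascript
db.foo.aggregate([
  // Date(0) + millisecond timestamp yields a real Date object
  { $project: { date: { $add: [new Date(0), "$ts"] } } },
  // Group on a composite _id of year/month/day (UTC)
  { $group: {
      _id: { y: { $year: "$date" }, m: { $month: "$date" }, d: { $dayOfMonth: "$date" } },
      count: { $sum: 1 }
  } }
])
```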
[12:05:05] <greybrd> hi guys.. in mongodb db.collection.mapReduce() command returns a json document which has details like { serverused, result, timing, counts }. how to understand that document better?
[12:56:21] <mylord> is this data a nested array? https://gist.github.com/anonymous/c584734abebf96578cb0
[12:56:38] <mylord> “The positional $ operator cannot be used for queries which traverse more than one array, such as queries that traverse arrays nested within other arrays"
[13:01:12] <NodJS> I need to insert a new document if there is no date overlaps
[13:01:45] <mylord> joannac: here’s the prob now: https://gist.github.com/anonymous/3f86628883d20567e0b7
[13:02:23] <joannac> you can't use the positional operator without specifying which entry you want?
[13:03:09] <mylord> u1.-1 was working to specify the 1st element in array, both for $update - $push and $set. maybe I had older mongo version running, idk, but now that doesn’t work, and now, ya, $each $position:0 works for inserting a users.u1 record, but not for updating the score of that 1st u1
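A sketch of the two operations mylord describes (the document shape is assumed, since the gists aren't reproduced here): inserting at the front of an array with $each/$position works, but updating that element afterwards needs either an explicit index or the positional operator with a matching query clause:

```javascript
// Insert a u1 entry at position 0 of the users array:
db.turnys.update(
  { _id: ObjectId("5384fe0d62ff8c4aee6d3eb1") },
  { $push: { users: { $each: [{ u1: { score: 0 } }], $position: 0 } } }
)

// Update the first element's score by explicit index ("users.0"):
db.turnys.update(
  { _id: ObjectId("5384fe0d62ff8c4aee6d3eb1") },
  { $set: { "users.0.u1.score": 10 } }
)

// Or via the positional operator, matching the element in the query:
db.turnys.update(
  { _id: ObjectId("5384fe0d62ff8c4aee6d3eb1"), "users.u1": { $exists: true } },
  { $set: { "users.$.u1.score": 10 } }
)
```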
[14:36:45] <LyndsySimon> Question - I understand that MongoDB can do document-level atomic operations, but cannot guarantee atomicity in multi-object operations. Can it guarantee that the state of a single object has not changed between two or more sequential operations? i.e., is it possible to lock an object, read it, then update that object with a guarantee that the object hasn’t changed since the read took place?
[14:37:27] <Derick> it can, but only through findAndModify
[14:38:30] <Derick> LyndsySimon: there are workarounds of course
[14:38:45] <LyndsySimon> Are there plans to allow single- or multi-object locking in the future?
[14:39:42] <LyndsySimon> Derick: I’ve not found a suitable workaround yet, particularly for operations that impact multiple objects. Is there a workaround that will guarantee atomicity?
[14:40:08] <Derick> LyndsySimon: such as: http://docs.mongodb.org/manual/tutorial/create-an-auto-incrementing-field/#auto-increment-optimistic-loop
[14:40:19] <Derick> LyndsySimon: not for multiple documents I suppose
[14:40:45] <Derick> you need to use compare and updates techniques that the URL that I posted describes
[14:41:02] <Derick> LyndsySimon: if you can re-architect your schema so that "multiple objects" are just one document, that is probably better
[14:41:55] <LyndsySimon> That’s what I’m thinking. I don’t know if it will be possible in all cases, but combined with an append-only log that we can use to verify state on a regular basis, I think it might be enough.
[14:42:33] <Derick> LyndsySimon: yeah, that's basically how MongoDB's (internal) journal works too
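The compare-and-update pattern Derick links to, sketched in the shell (collection and field names here are illustrative, not from the source): re-read the document, condition the update on the value you read, and retry if another writer won the race:

```javascript
// Optimistic concurrency: only apply the update if the document still
// looks the way it did when we read it; retry on a lost race.
function incrementBalance(id, amount) {
  while (true) {
    var doc = db.accounts.findOne({ _id: id });
    var res = db.accounts.update(
      { _id: id, balance: doc.balance },           // compare...
      { $set: { balance: doc.balance + amount } }  // ...and swap
    );
    if (res.nModified === 1) return;               // success
    // another writer changed balance in between; loop and try again
  }
}
```

This guarantees the single document didn't change between read and write, which is the per-object guarantee LyndsySimon asked about; it does not extend across multiple documents.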
[14:43:28] <Derick> MANCHUCK: you missed half the conversation. Had a safe trip back?
[14:43:30] <LyndsySimon> If I do an append-only log that can be used to calculate proper state, then I have a source of truth. Interestingly, MongoDB seems to be an ideal place to store that sort of log stream.
[14:44:10] <LyndsySimon> Oh my God dude - why would you fly to Newark?!
[14:46:04] <MANCHUCK> i needed to be on that side of the hudson after tek
[14:46:26] <MANCHUCK> trying to get through the bronx on a holiday weekend is way worse than newark
[14:48:54] <saml> LyndsySimon, mongodb is eventually consistent-ish thingy
[14:51:14] <LyndsySimon> MANCHUCK: I live in Virginia, moved here about a year ago from Arkansas. DC, New Jersey and Maryland scare me to death. It’s like a different country as soon as you cross the border.
[14:52:09] <LyndsySimon> saml: Yeah, I know. I’m trying to make sure I understand all the implications of Mongo’s architecture, while simultaneously separating out the assumptions we’ve made within our application from the actual requirements of the data we’re storing.
[14:52:35] <saml> that sounds real serious business professional
[14:52:37] <LyndsySimon> I’m finding that some of our assumptions are unnecessary, while some things that aren’t explicitly defined in our models are actually really important.
[14:53:02] <LyndsySimon> saml: Says the user named after an XML-based security protocol :)
[14:55:36] <LyndsySimon> “oplog tailing” - that’s a new term to me, but it sounds like what I’ve got in mind. Do you have a resource describing the concept?
[14:56:24] <arussel> is there something in mongo that can turn a timestamp in a date if I give it a timezone ? or is it just what is available in javascript ?
[14:58:19] <MANCHUCK> LyndsySimon, I am getting a talk together about how to effectively use MongoDB
[14:58:37] <MANCHUCK> sounds kinda like what you are talking about
[14:59:10] <MANCHUCK> arussel, mongodate does not store Time zones
[15:00:05] <arussel> MANCHUCK so if I want to group data by date as seen in california, I need to do it 'manually' ?
[15:00:48] <arussel> maybe add 7 hours in millis to the timestamp ... (or 8 sometimes)
[15:00:49] <MANCHUCK> yes you would have to figure out the date outside of mongo
[15:01:01] <MANCHUCK> Derick, talks about this with his DateTime Talks
[15:01:29] <arussel> this breaks my pipeline to do it outside mongo
[15:02:09] <arussel> I'll try to send the diff between california and UTC in milis and adjust the timestamp so I can group by day inside mongo
[15:02:47] <MANCHUCK> keep in mind you will have the same issue with time stamps in general
[15:02:57] <MANCHUCK> they do not contain the TZ info
[15:03:49] <arussel> I actually got the timezone in another field
[15:04:10] <LyndsySimon> arussel: careful with sending in a time delta from outside the DB - they aren’t constant. If your date range crosses a daylight savings changeover, you’ll have two deltas to deal with.
[15:04:39] <LyndsySimon> arussel: That’s why many databases don’t try to store datetimes with timezone information, and those that do often get it wrong. It’s an extremely complex issue.
[15:05:08] <Derick> as for timezones, I am about to start work on a proposal to get that into MongoDB... any real life reasons why you'd need that, please email me at derick (at) mongodb /cc arussel
[15:05:42] <Derick> LyndsySimon: yes, so complex I wrote about it (and implemented it in PHP!)
[15:07:09] <saml> client can convert to local time from UTC
[15:07:22] <LyndsySimon> rspijker: That’s how I handle it. I’m a Python dev, and I only deal with UTC on the serverside, period. Use moment.js or something else on the client side to translate them to user-facing strings.
[15:07:27] <saml> and client converts to UTC before sending to db
[15:07:42] <arussel> I've got timestamp in the db, but I need 'get me yesterday stats' as seen in california
[15:07:53] <LyndsySimon> When you think about it, timezone translation is actually presentation. The UTC time is the actual data.
[15:08:31] <LyndsySimon> arussel: Cool, then you should be able to easily figure a timestamp range in the application and pass that in instead of a date :)
[15:09:07] <arussel> LyndsySimon sure, but then I need more complex stuff and need to group by day
[15:09:48] <arussel> so group using day, year, month but this is utc and not pdt or whatever
[15:10:27] <arussel> I've already spent 2 hours just to get a date object out of a timestamp field :-)
[15:10:27] <Derick> you can do a project and add stuff to the date/timestamp
[15:11:02] <saml> arussel, give me example doc. and give me json that you want
[15:11:07] <arussel> yeah, pre-processing the data to put the date in would be best
[15:17:27] <LyndsySimon> I’m not a Mongo expert, unfortunately - but grouping by day seems like a problem that a simple query won’t solve, because you’re grouping by a loosely-defined, derived attribute of the document. I *think* the right approach there would be a map/reduce, but I hesitate to make a recommendation as I don’t think I’m yet qualified to do so.
[15:18:27] <rspijker> LyndsySimon: aggregation framework can do it, but it expects a date object and not a timestamp
[15:19:35] <LyndsySimon> without a timezone correction applied, a date object doesn’t seem to be what we’re really wanting to group by here.
[15:21:39] <rspijker> and how would map-reduce alleviate that concern?
[15:22:17] <Derick> rspijker: you can convert from timestamp to Date with it, and vv (IIRC)
[15:23:31] <arussel> Derick my use case is really statistics such as: I'm computing how many customers per day in a real life shop in san diego. I can't do that in mongo if I have only the timestamp
[15:24:15] <arussel> because if the shop is closed on saturday/sunday, I want 0 customer and not some leftover because of timezone error
[15:25:03] <rspijker> Derick: can’t you do that in aggregation as well?
[15:25:18] <arussel> (my use case is not a real shop, but a situation where it would be very visible if there were a 7-hour shift in the computed statistics)
[15:28:05] <rspijker> arussel: Either way, if you need to take it into account, take it into account… But it all starts with data, you need to have the information available and, preferably, in a format that will make your life easier
[15:29:14] <arussel> rspijker very true, I need to do a bit more work in the processing of data before inserting into mongo
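Pulling the thread together: a fixed-offset version of the group-by-day pipeline, shifting the timestamp before bucketing. This is a sketch with an assumed UTC-7 offset for PDT; as LyndsySimon points out above, a constant offset is wrong across a daylight-savings changeover, so the offset really belongs in the application per query range:

```javascript
// WARNING: a constant offset ignores DST transitions; compute the correct
// offset in the application if the queried range crosses a changeover.
var offsetMs = -7 * 3600 * 1000;   // PDT = UTC-7 (assumed)
db.foo.aggregate([
  // Shift the millisecond timestamp, then turn it into a Date
  { $project: { local: { $add: [new Date(0), { $add: ["$ts", offsetMs] }] } } },
  // Group on the shifted ("local") calendar day
  { $group: {
      _id: { y: { $year: "$local" }, m: { $month: "$local" }, d: { $dayOfMonth: "$local" } },
      customers: { $sum: 1 }
  } }
])
```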
[15:37:33] <remlabm> looking for advice in creating a 'timeline' based schema.. similar to how a twitter would be setup.. but want to get a better idea on how collections should be broken up and if there are any mongodb helpers or best practices for this scenario
[15:42:06] <MANCHUCK> mongoid stores the time stamp
[15:42:24] <MANCHUCK> so really it is just inserting into a collection
[16:12:23] <djlee> Hi all, in MySQL you can specify a custom sort order for the results (so if I want the ids in the order of [1,5,10,2,100,3] I can specify that in the ordering clause). Does mongo support doing a similar sort of thing?
[16:12:47] <djlee> or do you have to order by a document attribute and thats it
[16:35:47] <kali> djlee: by a field value, and you want to have an index matching the filtering criteria and the sort order (in that order) if there are more than 1000 documents to order
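As kali says, MongoDB (at least in the 2.x era discussed here) only sorts by field values, so a custom id ordering like djlee's is usually applied client-side after the fetch. A plain-JavaScript sketch, where `docs` stands in for a `find()` result:

```javascript
// Reorder fetched documents to match an arbitrary list of ids.
function sortByIdOrder(docs, order) {
  // Build a rank lookup: id -> position in the desired ordering
  var rank = {};
  order.forEach(function (id, i) { rank[id] = i; });
  return docs.slice().sort(function (a, b) {
    return rank[a._id] - rank[b._id];
  });
}

var docs = [{ _id: 1 }, { _id: 2 }, { _id: 3 }, { _id: 5 }, { _id: 10 }, { _id: 100 }];
var ordered = sortByIdOrder(docs, [1, 5, 10, 2, 100, 3]);
// ordered now has ids in the order [1, 5, 10, 2, 100, 3]
```

The alternative kali hints at — storing a rank field on each document and indexing it — is the right call when the ordering is stable and the result set is large.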
[16:50:20] <garrilla> i'm creating an index on a collection with 21^6 documents. The first 90% of the index took about 2 hours, the next 5% has taken about 3 hours and the current 1% is chugging away. There doesn't seem to be much disk activity and ram use is very low and stable. Does this sound par for the course?
[17:21:41] <dgarstang> I could really use some help building RPMs for mongo. Very few resources available.
[18:06:45] <hpekdemir> ok my goal is to have a centralized pool of information about several routers/switches. I use a perl script to fetch information via netconf protocol. I would like to fill a mongodb with that information
[18:06:58] <hpekdemir> fetching information will happen every minute.
[18:07:30] <hpekdemir> but since I don't have any control over icinga/nagios's scheduling queue, it could be that nagios wants to fetch the mongodb document information
[18:07:40] <hpekdemir> and at the same time the perl script could write into that document.
[18:16:08] <saml> how much data do you write every minute? how much data do you read?
[18:16:24] <cheeser> well, you can't. those operations will be interleaved in one form or another.
[18:22:19] <hpekdemir> saml: I fetch a bulk of information (returned in xml), process it, write to mongodb. and this every minute. it is about up to 100 entries
[18:31:18] <dgarstang> I am trying, in vain, to get RPMs for mongo 2.6 with ssl.
[18:31:29] <hpekdemir> I will start testing it. and later consider to use sharding
[18:31:42] <hpekdemir> or master/slave replicasets
[18:32:03] <dgarstang> The spec files in the github source don't work, and the non-enterprise RPMs at http://downloads-distro.mongodb.org/repo/redhat/os/x86_64/RPMS/ don't have SSL
[18:32:19] <dgarstang> There are also no SRPMs at http://downloads-distro.mongodb.org/repo/redhat/os/x86_64/RPMS/
[18:32:52] <jareddlc> hey guys, i have a quick question setting up my conf file
[18:33:00] <jareddlc> an older version of mongo I'm using has the line
[18:33:06] <dgarstang> Been at this a week and I think the boss is starting to get irritated with me. :(
[18:36:36] <cheeser> you grep for that port primarily.
[18:36:43] <cheeser> probably an old mongo process running, though.
[18:37:39] <dgarstang> Well, I'm (in)conveniently located in Palo Alto. Gonna do to this today. http://www.meetup.com/MongoDB-SV-User-Group/events/177953032/ Maybe I'll get some answers there
[18:37:56] <jareddlc> I've tried looking for that port being open but no luck
[18:38:17] <jareddlc> anyways, thanks for the help. removing that line solved my problem. I'm just not sure what's the proper way to add an IP
[18:39:49] <saml> i have 50% of documents with modifiedDate field. 50% don't. how would I paginate the collection?
[18:40:09] <saml> should i add very old modifiedDate to all documents who don't have the field?
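On saml's pagination question: in MongoDB's sort order, documents missing the sort field are treated as if it were null, and null sorts before any real BSON date ascending, so backfilling isn't strictly required for a deterministic order. Both options, sketched with the field and collection names from the question:

```javascript
// Option 1: just sort; documents without modifiedDate sort as null,
// i.e. before all real dates ascending. Adding _id as a tiebreaker
// keeps skip/limit pagination stable:
db.coll.find().sort({ modifiedDate: 1, _id: 1 }).skip(20).limit(10)

// Option 2 (saml's idea): backfill a sentinel "very old" date on
// documents lacking the field:
db.coll.update(
  { modifiedDate: { $exists: false } },
  { $set: { modifiedDate: new Date(0) } },
  { multi: true }
)
```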
[18:41:27] <dgarstang> How about compiling 2.6 on CentOS? That fails with "scons: *** [build/linux2/64/ssl/mongo/mongoimport] Error 1"
[18:58:06] <dgarstang> Anyone know where I can get mongodb RPM's for version 2.6.x with SSL support?
[19:03:16] <radixo> hello guys.. I have just installed mongodb in my system... how can I setup user and password for login?
[19:05:21] <cheeser> dgarstang: that's an enterprise feature and not available in the RPMs provided via yum
[20:09:49] <KrzyStar> What's the exact difference between mongojs and mongodb native driver for node?
[20:10:20] <KrzyStar> From what I've read, mongojs wraps the latter, but what's the point? :p
[20:28:07] <jesswa> I have a query with $or statements inside of an $and - as soon as I add that $and, my queries stop using the indexes… Does anybody have a hint as to why nesting these might skip the indexes, or have a good reference on how mongo indexes $or inside of an $and?
[20:44:57] <saml> Aaron Staple planned to implement that in 2011 but then he quit
[20:46:47] <jesswa> @saml, have a compound index on three fields and trying to query on various combinations of those three fields (in correct order)… trying to rewrite the query as individual permutations of each $or… will post a pastebin if this doesn't work out :)
[20:47:31] <saml> i don't know.. just give us test case? few db.coll.inserts db.coll.ensureIndex.. and queries you're running
[20:54:28] <jesswa> @saml untested, but I'm doing something like this http://pastebin.com/VFM3eUYF
[20:55:12] <jesswa> after I rewrote it: I had two $or statements inside of an $and, but changed it into 6 $or statements at the top level, and explain() is showing that it is using the correct indexes now
[20:57:50] <jesswa> but that is the example that breaks
[20:58:14] <jesswa> the first query (line 7) will use the index, the second query (line 12) will not
[20:58:37] <jesswa> and the only difference is that '$and' operator
[20:59:13] <jesswa> so I was wondering why that would suddenly stop using the index?
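jesswa's rewrite in schematic form (field names a/b and the values are placeholders; the pastebin itself isn't reproduced here): in 2.x the planner can use one index per top-level $or clause, but an $or nested under $and often falls back to a collection scan, so distributing the $and across the clauses helps:

```javascript
// Compound index on the queried fields:
db.coll.ensureIndex({ a: 1, b: 1, c: 1 })

// Often NOT well indexed in 2.x: $or clauses nested inside an $and
db.coll.find({ $and: [
  { $or: [{ a: 1 }, { a: 2 }] },
  { $or: [{ b: 1 }, { b: 2 }] }
] })

// Rewritten as a single top-level $or, distributing the conjunction;
// each branch can then be planned against the index on its own:
db.coll.find({ $or: [
  { a: 1, b: 1 }, { a: 1, b: 2 },
  { a: 2, b: 1 }, { a: 2, b: 2 }
] })
```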
[21:09:44] <jhuel> hi i have a problem building the C++ driver
[21:10:22] <jhuel> i get LINK : fatal error LNK1104: cannot open file 'boost_thread-vc110-mt-1_49.lib' when trying to call this scons $ARGS install-mongoclient --dynamic-windows --sharedclient
[21:11:15] <jhuel> but it builds right when i call scons $ARGS install-mongoclient or scons $ARGS install-mongoclient --dbg=on
[21:30:26] <ngl> Hi all. I'm trying to do a writeTo on the server side, but I don't seem to have the File class... so I can't just: file.writeTo(new File("/tmp/smithco.pdf"));. How do I get the File class referenced in the mongo docs?
[21:31:49] <ngl> I am getting nothing from searching for answers, so I'm here. :D
[21:37:37] <dgarstang> So... how does one load balance two routers? :)
[22:13:08] <KrzyStar> Interesting... mongo shell segfaults immediately after starting up
[22:13:18] <KrzyStar> Must be something wrong with my installation I guess..
[23:24:00] <tafryn> Does the c++ driver support upsert? I can't find it in the api.