[00:03:44] <mwolffhh> hi, someone got an idea how to query this? got a collection where *some* documents have a field "last_used", which is a datetime field. I want to select all documents where the field exists *and* has a date value earlier than specified.
[00:04:32] <mwolffhh> tried find({"last_used":{"$exists":true}, "last_used":{"$lte":{"sec":1392144183,"usec":0}}}), to no avail
[00:14:13] <erik508> that will only find documents where "last_used" exists, and matches the time comparison
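A minimal pure-JavaScript sketch of why mwolffhh's query silently drops a condition: in an object literal a repeated key overwrites the earlier value, so both operators have to live under one key.

```javascript
// Duplicate keys in an object literal: the later value wins, so the
// $exists clause is discarded before the query is ever sent.
const broken = { last_used: { $exists: true }, last_used: { $lte: 1392144183 } };

// Combining both operators under a single key keeps both conditions.
const fixed = { last_used: { $exists: true, $lte: 1392144183 } };

console.log(JSON.stringify(broken)); // {"last_used":{"$lte":1392144183}}
console.log(JSON.stringify(fixed));  // {"last_used":{"$exists":true,"$lte":1392144183}}
```

In the real query the `1392144183` placeholder would be a proper `Date`/ISODate value rather than a bare integer.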
[00:20:28] <txt23> How can I allow only a few IPs to connect to MongoDB instead of keeping it open to the web?
[00:21:02] <mwolffhh> erik508: yes, it works when I use a date object in my query - but what about that { "sec": 4711, "usec": 0 } variant? I use Doctrine ODM to generate queries and it automatically converts PHP DateTime objects to precisely that syntax
[00:25:20] <mwolffhh> txt23: according to this, you should probably use your OS's firewall and/or run MongoDB within a VPN: http://docs.mongodb.org/manual/core/security-network/
[00:25:49] <mwolffhh> txt23: but I am new to MongoDB, so there may be a better way
[00:26:24] <mwolffhh> txt23: but even if there is, it probably can't hurt to set this up as well as another line of defense
[00:26:26] <erik508> mwolffhh: hmm, not too familiar with Doctrine
[00:27:20] <mwolffhh> erik508: if I googled correctly, this seems to be similar to what the native MongoDate object does
[00:31:27] <txt23> mwolffhh: Yeah makes sense. I am going to have a replication server and for that I was thinking of adding only a few IPs to the allow list
[00:32:27] <txt23> You can run both mysql and mongo on the same server right?
[00:37:39] <txt23> I just installed mongo using this "yum install mongo-10gen mongo-10gen-server" but when I try to go into "mongo" here is what I get "Tue Feb 11 19:34:34.002 Error: couldn't connect to server 127.0.0.1:27017 at src/mongo/shell/mongo.js:145"
[00:37:55] <txt23> Even "netstat -anp | grep 27017" doesn't show anything
[00:38:11] <txt23> And there is nothing in the log file
[00:38:47] <erik508> mwolffhh: those look like Timestamp types vs. ISODate
[00:39:05] <erik508> eg. new Timestamp(1392144183, 0)
[00:39:23] <erik508> if that's the case, you can do stuff like
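The {sec, usec} pair Doctrine emits maps cleanly onto a JavaScript Date, which is what a datetime comparison expects. A pure-JavaScript sketch of the conversion (the helper name is made up; the sample value is the epoch-seconds figure from the query above):

```javascript
// Convert a {sec, usec} pair (seconds + microseconds since the epoch)
// into a JavaScript Date, which the shell renders as an ISODate.
const fromSecUsec = ({ sec, usec }) => new Date(sec * 1000 + Math.round(usec / 1000));

console.log(fromSecUsec({ sec: 1392144183, usec: 0 }).toISOString());
// 2014-02-11T18:43:03.000Z
```

If the field was actually written as a BSON Timestamp rather than an ISODate, the stored type has to match the query type for comparisons to behave.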
[01:19:54] <txt23> joannac: I have MySQL and web server on the same machine. Would I not need NUMA for it?
[01:27:31] <txt23> joannac: I was able to finally launch it using this "numactl --interleave=all mongod", however I'd like mongo to be running in the background. How can I do that?
[02:49:30] <meltingwax> Does mongo's map reduce implementation do successive reductions for the same key? Such as reduce(key, [reduce(key, firstHalf), reduce(key, secondHalf)]) ?
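Yes: MongoDB's map-reduce may invoke reduce repeatedly on partial results, exactly in the nested shape meltingwax describes, so the reduce function must return the same answer whether it sees all values at once or previously reduced ones. A pure-JavaScript sketch of that required property, using a simple sum as the hypothetical reduce:

```javascript
// A reduce function must be associative and idempotent over its own output:
// reduce(k, [reduce(k, firstHalf), reduce(k, secondHalf)]) === reduce(k, all)
function reduceSum(key, values) {
  return values.reduce((acc, v) => acc + v, 0);
}

const all = [1, 2, 3, 4];
const once = reduceSum("k", all);
const rereduced = reduceSum("k", [reduceSum("k", [1, 2]), reduceSum("k", [3, 4])]);

console.log(once === rereduced); // true
```

A reduce that violates this (e.g. one that counts calls, or whose output shape differs from its input values) gives wrong totals once the server re-reduces.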
[07:19:09] <txt23> I was able to install mongo on my server. However, it seems it's fully accessible from everywhere. What are a few steps I can take to secure the connection so only my local server and one IP have access to it?
[07:29:04] <guuha> I use uuid v4 in my app as _id. Now I can't decide on how to store these, 1: untouched, as String 2: remove dashes, BINARY_SUBTYPE_BYTE_ARRAY, 3: untouched: BINARY_SUBTYPE_UUID. My db is realtime so lots of deletes + inserts. Nothing will stay in permanently
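For option 2, stripping the dashes and packing the hex into bytes halves the stored size versus the plain 36-character string. A pure-JavaScript sketch of the conversion (the helper name is made up):

```javascript
// Hypothetical helper: strip dashes from a UUID string and pack the
// remaining hex into a 16-byte array, ready to wrap in a BSON Binary.
function uuidToBytes(uuid) {
  const hex = uuid.replace(/-/g, "");
  const bytes = [];
  for (let i = 0; i < hex.length; i += 2) {
    bytes.push(parseInt(hex.slice(i, i + 2), 16));
  }
  return bytes;
}

console.log(uuidToBytes("123e4567-e89b-42d3-a456-556642440000").length); // 16
```

Note that random v4 UUIDs as _id make the _id index effectively unordered, which matters for a workload of constant inserts and deletes regardless of the encoding chosen.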
[09:30:22] <ncls> when you run "mongo" on your server, you access this shell
[10:14:17] <tymohat> http://clip2net.com/s/6O20y4 mongo activity, how to investigate it?
[10:34:28] <_boot> Hi, I started draining a shard and it now says draining:true in the sharding status, however nothing seems to be happening with the number of chunks on any of the shards... using 2.5.4, is something wrong?
[11:40:14] <orweinberger> Is there a way to restore a db to a mongos in a way that will split the data to the shards instead of loading it all to one shard?
[11:47:58] <kees_> when i use a file output with a path like 'path => "test_%{+HH}"' the hour is one off; i'm in GMT+1, so right now i'd expect a file called test_12, but i get a file called test_11 - what is the best way to get the right time?
[11:52:16] <Batmandakh> Hello everyone! So, I'm thinking of using the laravel php framework for a project
[11:52:29] <Batmandakh> and I want to implement MongoDB on Laravel
[11:52:44] <Batmandakh> can anyone suggest a mongodb odm for the laravel framework?
[11:53:00] <Batmandakh> I saw https://github.com/boparaiamrit/Laravel-Mandango from http://docs.mongodb.org/ecosystem/drivers/php-libraries/
[11:53:02] <Derick> Batmandakh: I would suggest you try without ODM first, especially if you have never used MongoDB
[11:53:47] <Batmandakh> without? So, how can I start without ODM in laravel?
[11:54:03] <Derick> Use the PHP driver directly? I am sure you can write models in Laravel?
[11:54:07] <Batmandakh> How do you use laravel framework? I'm new to laravel too....
[11:54:17] <Derick> I don't use (or know) Laravel.
[13:34:40] <tscanausa> so the problem I am having is: when I try to create a replica set, the first node always starts off using the short hostname, say "mongo1", and when I add other nodes to the replica set they do not know how to resolve "mongo1" via DNS, so the replica set never starts
[13:35:10] <tscanausa> by "never starts" I mean the secondaries never join
[13:35:17] <Derick> tscanausa: you should be able to fix the replicaset set config
[13:35:22] <Derick> you can just modify it yourself
[13:37:01] <tscanausa> Derick: so you're saying the only way to set/change a hostname in mongo is to log into the mongo shell and modify the config.
[13:37:27] <tscanausa> there is no way to set it on boot.
[13:37:38] <kees_> on boot it will read that config
[13:37:50] <Derick> well, an alternative is to only use DNS names and make sure all the nodes' names can be resolved on all nodes
[13:37:59] <Derick> either through DNS, or /etc/hosts
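A pure-JavaScript sketch of the fix Derick describes: take the document rs.conf() returns, rewrite any short member hostname to a fully qualified one, and feed it back to rs.reconfig(). The domain and helper name here are assumptions for illustration.

```javascript
// Rewrite short member hostnames (e.g. "mongo1:27017") to fully
// qualified ones every node can resolve ("mongo1.example.com:27017").
function fixHosts(cfg, domain) {
  cfg.members.forEach(m => {
    const [host, port] = m.host.split(":");
    if (!host.includes(".")) m.host = host + "." + domain + ":" + port;
  });
  return cfg;
}

// In the mongo shell this would be applied as:
//   var cfg = rs.conf();  /* fix the hosts */  rs.reconfig(cfg);
const cfg = { _id: "rs0", members: [{ _id: 0, host: "mongo1:27017" }] };
console.log(fixHosts(cfg, "example.com").members[0].host);
// mongo1.example.com:27017
```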
[13:40:25] <tscanausa> I have DNS and hostnames set up for all the nodes, like mongo1.example.com, mongo2.example.com, but the first node in a replica set only uses "mongo1" as its hostname.
[13:43:23] <tscanausa> Do you think it is far fetched to be able to tell mongod what hostname it should use on start?
[13:44:52] <Derick> tscanausa: yes, it's a system configuration thing
[13:46:10] <mierst> hmm, sounds like a feature request
[13:48:16] <tscanausa> so if I were to start a mongo cluster on aws and used cnames on my dns to aws's dns I would then have to update all of my /etc/hosts and hostnames on all servers participating in the cluster? (this is not what I am doing, but an example of where setting the hostname would be useful)
[13:52:17] <Derick> tscanausa: you should use the DNS assigned names
[13:52:24] <Derick> no need to use /etc/hosts if you use the real DNS names
[14:00:18] <tscanausa> Which DNS names would be used?
[14:36:34] <amitprakash> Hi, for a relational model with tables A, B, C such that A(b=FK(B)), C(id=(pkey(A)|pkey(B)) what would be a recommended mongo document model?
[14:37:36] <Nodex> if your application really does need that level of depth then MongoDB is probably not the right choice for your app
[14:46:44] <tscanausa> amitprakash: depends on your needs. If C is a dependent of B, it would be nice to know how many pieces of B are complete, and then you can do the same for A: how many pieces of A (Bs) are needed to complete A. But it all depends on your requirements.
[14:47:58] <amitprakash> tscanausa, a simple enough use case of a shipment being moved around, shipments are aggregated into bags, bags are aggregated into dispatches..
[14:48:37] <amitprakash> The issue with subdocument is that since multiple As lie in the same C, the sub-document would be replicated entirely across multiple documents in A
[14:50:26] <amitprakash> consider some update I wish to make to C: I'd be updating multiple documents. Add to this the index cost; as a separate document, the index size would be much smaller for the sub-document than indexing keys on the larger meta document?
[14:51:29] <amitprakash> No, A is package, B is Bag, C is dispatch, status is a separate sub-document
[14:51:45] <amitprakash> Or if we wish to simplify it, theres package, dispatches and statuses
[14:51:57] <tscanausa> amitprakash: maybe you can reverse it, C is the parent of b and a
[14:52:28] <Nodex> a DOCUMENT is the order. Everything sub of that is something that happens to the ORDER
[14:53:03] <amitprakash> Let me rephrase, currently, a status table S contains entries against instances of A or B. B otoh is a parent of A
[14:53:21] <amitprakash> there is no FK relationship b/w S and A/B however
[14:54:06] <Nodex> ok I'll give you the best piece of advice for working with Mongodb. Forget EVERYTHING you know about relational databases because it does not apply
[14:56:32] <amitprakash> Now the next bit, how do I find out which items of A belong to B .. well currently B.a_key = [A._id]
[14:57:33] <Nodex> why are A and B related ?, if they're an order then they're two separate entities
[14:58:10] <amitprakash> B however expires in usage at a certain time, at which point A is freed up.. However removing A._id from B.a_key isn't acceptable since I want to also know what items of type A used to belong to an instance of B
[14:58:30] <amitprakash> B is a set of items of type A, hence they're related
[14:59:16] <amitprakash> To solve this, I doubly stored A.parent = B._id for some B
[14:59:58] <Nodex> I'm going to simplify this for you. foo, bar, baz <--- all orders from a store, foo gets shipped and thus has an entry in a nested document with a log of this, the same happens to bar and baz.
[15:04:19] <Nodex> but you can add the Id's (which I DON'T propose)
[15:04:29] <amitprakash> Which is what I am doing as said previously
[15:04:44] <amitprakash> B.a_key = [A._id] where B is the COLLECTION and A is the order
[15:06:08] <Nodex> I'll simplify it even further. Books, Boxes, Containers. Books get sold and get put into boxes (nested), "Boxes" get copied into "Containers". Book sales are stored in a collection called (for brevity) "sales".
[15:06:56] <Nodex> Sales get updated with what happens to them (the logs you discussed). when the "Sale" gets boxed up ready for shipping you copy the entire document into a sub document of "Boxes"
[15:07:13] <Nodex> when the "Boxes" get packed into the container you do the same.
[15:07:25] <amitprakash> Instead of copying the entire document into a subdoc I am copying it as ids
[15:07:29] <Nodex> (into the "Containers" collection)
[15:07:45] <Nodex> that's 3 joins when you don't need any
[15:07:53] <amitprakash> so Containers.subdoc_ids = list(Sales.id)
[15:08:57] <Nodex> Containers -> "a_container_document_id" has a list of all boxes. Inside those boxes are "books"
[15:09:53] <Nodex> in this way you can pull any container and see its contents, any box and see its books, and any book to see what happened to it
[15:11:59] <Nodex> on the off chance that you have to update any one book, the update requires two updates (one to the sale and one to the boxes document)
[15:12:02] <amitprakash> If so, then instead of storing instances of Books as subdocuments in Box, I am storing the list of ids of Books in the Box itself
[15:12:22] <Nodex> but the data is historical and will probably never change
[15:12:38] <amitprakash> Actually, the data changes all the time
[15:12:40] <Nodex> storing the id requires extra queries and is not needed
[15:12:58] <Nodex> is it written more than it's read?
[15:20:02] <amitprakash> We look up books by _id mostly, but most of the financial work happens over the type of book, the distance it moved, the priority of it etc
[15:20:34] <amitprakash> So the lookups would be on boxes against box.book.distance, box.book.priority, box.book.type
[15:20:34] <Nodex> there is no silver bullet. you can easily search all boxes for books of a title but it obviously needs an index. IF you just store the _id of the book in the box then you need to do at least ONE more query
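A pure-JavaScript sketch of the trade-off Nodex describes, with made-up documents: the embedded box answers the question in one lookup, while the id-only box needs a second query.

```javascript
// Embedded: the box document already carries its books, so a single
// find on "boxes" can filter on box.books.priority directly.
const boxEmbedded = {
  _id: "box1",
  books: [{ id: "b1", priority: 1 }, { id: "b2", priority: 2 }],
};
const urgent = boxEmbedded.books.filter(b => b.priority === 1);
console.log(urgent.length); // 1

// Referenced: the box only stores ids, so after fetching it you still
// need a second query ({_id: {$in: box.book_ids}}) against "books"
// before the same filter can be applied.
const boxReferenced = { _id: "box1", book_ids: ["b1", "b2"] };
```

The cost of embedding is the duplication amitprakash objects to: an update to a book that lives in several places means several updates.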
[15:33:38] <remonvv> quick question; can anyone explain the following oddity; we're running update({a:1, b:1}, {$inc:{c:1}}, true, false) operations quite a bit. This always resulted in the expected result (only one document with {a:1, b:1} exists and an ever increasing counter for "c"). However, we've just found an occurrence where we have 6 documents with {a:1, b:1}. In other words, it seems to violate the atomicity guarantee.
[15:34:00] <remonvv> Can anyone think of a possible scenario where this would happen? It's a non sharded collection, through mongos.
[15:34:30] <remonvv> Documents generated by the same machine and process (the middle 6 bytes of the ObjectId) in a timespan of ~1 second.
[15:43:24] <fabio_c> I have a question: is it possible in MongoDB to get a field of a referenced object? For example, I need to select all users in my db; each user has a reference to a dogs collection but I need just the dog's name.
[15:43:51] <Derick> fabio_c: you need to do two queries
[15:44:17] <Nodex> fabio_c: relations are not supported
[15:44:30] <fabio_c> Derick, I needed to do this in mongohub to export some records
[15:45:20] <remonvv> Derick: Has this changed? I distinctly recall discussing this with someone here way back when and he said that all update operations have the "find" part of the update inside the write lock.
[15:45:50] <Derick> remonvv: hmm, I don't know - perhaps it'd be good to write to the mongodb-dev list about it?
[15:46:06] <Derick> it's sometimes tricky to get an authoritative answer on this
[15:46:57] <remonvv> Yes. If it always worked like this it's just a new bit of documentation but if the behaviour changed there are backward compatibility problems.
[15:47:30] <Derick> remonvv: can you send me an email about it? I can send it around asking people's advice then
[15:52:14] <Derick> I got another bottle yesterday. Win.
[15:52:29] <Derick> it's handy to have an amazon wishlist for them
[16:10:57] <Joeskyyy> Anyone fancy with the perl driver and replsets by chance? :D
[17:42:52] <atmo> hello, I'd like to create separate objects based on a 2nd level child of a document, but it seems that .forEach can only be used on collections?
[17:43:35] <atmo> i would like to convert each of db.collection.team.players.playerA...Z to individual playerA...Z documents
[17:45:09] <atmo> http://pastebin.com/jTcpBZzw at line 4 i get the error that (players) has no method foreach
[18:17:47] <Joeskyyy> atmo: you mean something like this: db.test.find().forEach(function(t){for (var p in t.team.players) print (p)})
[18:24:56] <atmo> i would like to create new objects from each of players.*
[18:25:48] <atmo> the key value of each players child can be discarded
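A pure-JavaScript sketch of the transformation atmo describes: forEach exists on cursors and arrays, not on a plain object, so the players object has to be iterated by key (as in Joeskyyy's snippet) and each child copied out as its own document. The field names here follow the chat and may not match the real schema.

```javascript
// Turn each child of team.players into a standalone document,
// discarding the playerA...Z keys as requested.
function explodePlayers(doc) {
  const out = [];
  for (const key in doc.team.players) {
    out.push(Object.assign({}, doc.team.players[key]));
  }
  return out;
}

const sample = { team: { players: { playerA: { name: "a" }, playerB: { name: "b" } } } };
console.log(explodePlayers(sample).length); // 2
```

In the shell, each resulting object could then be inserted into a new collection inside the cursor's forEach callback.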
[18:27:18] <ghanima> I have an 8core 124GB mongodb slave that is primarily a read heavy database (with very very very little writes). All queries seem to fit into memory very well... However from a performance standpoint my LOADavg spikes way above the number of cores that I have on the machine.. I mean load averages as high as 300.... However all my clients so far seem to be getting query results in a reasonable amount of time and are not seeing
[18:27:24] <ghanima> when connecting to mongodb server
[18:28:01] <ghanima> The only thing that I can see(still digging into this) is that RUN queue has moments of burst like 30 to 150 processes in the r state
[18:28:10] <Joeskyyy> p.Name wouldn't work since you would technically need to call (as an example) t.players.medic.name
[18:28:16] <ghanima> outside of that this is the only thing that I am seeing that I can attribute to the spike
[19:51:51] <hahuang65> Joeskyyy: what do you mean that's not how you use or? You mean it's syntactically incorrect or are you trying to assume what I'm trying to do?
[19:53:45] <hahuang65> Joeskyyy: a rubygem is generating this query for me
[19:53:54] <hahuang65> Now, I want to know, since the $or is obviously useless here
[19:54:16] <hahuang65> does Mongo take care of that for me, or do I have to remove the $or clause from each query in my app to not have a slowdown in my queries
[19:54:58] <hahuang65> $or clause has 1 condition in it, which means it's the same as doing {id: 1, id: {$in: [1, 2, 3]}}, which is the same as {id: 1}
[19:55:27] <hahuang65> this is clearly a bug in the gem, but I'm wondering if I have to handle it myself, or if mongo will see that there's a way to optimize the query down to {id: 1}
[19:55:49] <Joeskyyy> Mongo will try to find the shortest path it possibly can take.
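If the gem can't be fixed, stripping the degenerate clause client-side before issuing the find is straightforward. A pure-JavaScript sketch (the helper name is made up; $and is used rather than merging keys, so a top-level condition on the same field is never clobbered):

```javascript
// A one-element $or is logically identical to its single branch, so it
// can be folded away before the query is sent.
function simplifyOr(query) {
  if (Array.isArray(query.$or) && query.$or.length === 1) {
    const { $or, ...rest } = query;
    return Object.keys(rest).length ? { $and: [rest, $or[0]] } : $or[0];
  }
  return query;
}

console.log(JSON.stringify(simplifyOr({ id: 1, $or: [{ id: { $in: [1, 2, 3] } }] })));
// {"$and":[{"id":1},{"id":{"$in":[1,2,3]}}]}
```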
[21:01:32] <hahuang65> mst1228: it's unlikely that this will fit your use case, but there's always a slight chance. You can check out capped collections
[21:02:25] <hahuang65> mst1228: i'm not sure whether you can cap a collection based on a field or not.
[21:22:08] <hahuang65> cheeser: it was a reach, but it was just a suggestion. I don't know what his use case may be, but it MAY (on an off chance) fit the use case into capped collections
[21:22:23] <cheeser> 16:01 < cheeser> collections are capped on size in bytes or doc count
[21:22:31] <cheeser> yes. doc count. i know. and said so.
[21:22:54] <hahuang65> cheeser: I got confused and thought it was a question, my bad.
[21:48:55] <tongcx> hi guys, is it possible to do .find({[field1, field2] : [1,3]})?
[21:50:56] <NoirSoldats> I'm going to do a horrible job explaining this problem, but hang with me. We have a document with an embedded array of 'certifications', when we remove one and persist the entire document again the removed certification is still there. We are using Symfony2 and the Doctrine ODM.
[21:57:48] <tongcx> this room seems not very active
[21:58:22] <cheeser> NoirSoldats: consider using $pull
[21:58:31] <cheeser> tongcx: when you tried what happened?
[21:58:45] <cheeser> but no, that is not the syntax for find()
[22:00:31] <tongcx> cheeser: yes, it doesn't allow that
[22:00:49] <tongcx> cheeser: so I want to do '$within : box'
[22:00:58] <tongcx> cheeser: but my lat and lng are two fields instead of one
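With lat and lng in two separate fields, $within (which expects a single coordinate pair) doesn't apply directly, but a box is just two range conditions, e.g. find({lat: {$gte: y1, $lte: y2}, lng: {$gte: x1, $lte: x2}}). A pure-JavaScript sketch of the predicate those ranges express, with made-up field names:

```javascript
// Equivalent of a $within box when lat/lng are separate fields:
// two independent range checks, one per axis.
const inBox = (doc, box) =>
  doc.lat >= box.bottomLeft[0] && doc.lat <= box.topRight[0] &&
  doc.lng >= box.bottomLeft[1] && doc.lng <= box.topRight[1];

const box = { bottomLeft: [10, 30], topRight: [20, 40] };
console.log(inBox({ lat: 15, lng: 35 }, box)); // true
console.log(inBox({ lat: 25, lng: 35 }, box)); // false
```

Note this form cannot use a geospatial index; storing the coordinates together as a single [lng, lat] pair would allow a 2d index and the real $within operator.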
[22:22:31] <NoirSoldats> So, here's what seems to be the problem... We have a set of certifications "certifications" : [ {"name" : "Cert 1", "desc" : "Cert 1 Desc" },{"name" : "Cert 2", "desc" : "Cert 2 Desc" }] and then we want to empty that up so we send "certifications" : [] but when we pull that document back up it still has the original certs.
[22:24:02] <joannac> tongcx: rearrange? in what sense?
[22:24:43] <joannac> NoirSoldats "send" is not a valid word. What do you actually mean? update? what's the exact update statement?
[22:28:31] <NoirSoldats> joannac: We don't have the exact update statement, we're using the Doctrine ODM.. Let me see if there's an equivalent to getSQLQuery()
[22:35:56] <freeone3000> I'm getting a "WARNING: Server seen down: 10.0.1.6:27017" from my driver when `mongo --host 10.0.1.6 --port 27017` works just fine. Any suggestions?
[22:36:38] <freeone3000> That particular server is a mongos instance, and each shard is a replication set, but currently every server is visible using 10.0.1.0/24 to the host.
[22:36:43] <freeone3000> joannac: I'd agree, if it wasn't repeatable.
[22:37:19] <joannac> are you testing on the same server as wherever your code (using the driver) is?
[22:43:57] <freeone3000> joannac: How should I check this from mongo console?
[22:44:43] <joannac> connect to a node from each replset and run rs.status()
[22:45:01] <NoirSoldats> joannac: We're using doctrine's ->persist() followed by a ->flush().. I'm starting to think the problem lies in there..
[22:45:51] <freeone3000> joannac: Gives me https://gist.github.com/freeone3000/8c5fff3aff25faa4e0ba , which looks right to me.
[22:46:59] <freeone3000> joannac: 10.0.1.6 is mongos, 10.0.1.7 and 10.0.1.8 are members, 10.0.1.6 is also an arbiter on a separate port in a separate process. We have a PRIMARY, a SECONDARY, an ARBITER, all three config servers, I don't see why the driver would cause a failure.
[22:47:26] <joannac> freeone3000: you only have one shard?
[22:47:40] <freeone3000> joannac: Yeah, this is supposed to prove the architecture works with our services.
[22:47:55] <freeone3000> joannac: So we have a sharded server with one shard.
[22:50:36] <joannac> what's your connection string?
[22:52:58] <freeone3000> joannac: new MongoClient(Lists.newArrayList(new ServerAddress("10.0.1.6"), new ServerAddress("10.0.1.6")), new MongoClientOptions.Builder().readPreference(ReadPreference.primaryPreferred()).writeConcern(WriteConcern.UNACKNOWLEDGED).build()));
[22:53:28] <joannac> why are you initiating with a list with 2 identical IPs?
[22:53:57] <freeone3000> joannac: Because I only have one mongos server and the code that gets to this point expects two.
[22:54:30] <joannac> 10.0.1.6:27017 is your mongoS?
[22:56:56] <freeone3000> joannac: And 10.0.1.7 can see 10.0.1.8, and 10.0.1.8 can see 10.0.1.7 and 10.0.1.6, and they can all see 10.0.1.6 on a different port, too, and moreover, they can all see 10.0.1.243, which is where the application happens to be running, and it can see all of them.
[22:57:27] <freeone3000> joannac: Verified with `mongo` and `tcping` every step of the way.
[22:58:57] <freeone3000> joannac: Every single time.
[22:59:37] <joannac> what version of mongodb and driver?
[23:01:30] <freeone3000> joannac: mongos is running 2.4.9, each mongod instance on the shard is 2.4.8 (except for the arbiter, who is 2.4.9); driver version is ... 2.12.0-COMPANYNAME
[23:43:34] <freeone3000> new version of driver, new error. Version 2.11.3 gives "com.mongodb.MongoException: can't find a master". Same client. Still all reachable. rs.status() is still okay. sh.status() is still okay. `mongo` still works.
[23:47:37] <freeone3000> Running 2.4.8, there should be no problem specifying two members of a replicaset in the host for a shard, correct?